FNA Edge Network Appliance: Operations Guide
Apr 2022
Copyright & Trademarks
© 2022 Meta. All rights reserved.
Thank you for choosing to install the FNA Edge Network Appliance (FNA)!
FNA is Meta’s content caching program. FNA provides Internet Service Providers (ISPs) with an efficient
means of delivering static Meta content from within their network. Upon deployment, an ISP will offload a
significant amount of Meta content from its backbone network and vastly improve the Meta user
experience.
An FNA cluster consists of a Top-of-Rack (ToR) switch and from two to twenty servers. The hardware is
suitable for deployment in data centers, colocation facilities, and outside plant environments (the industry
standard 19-inch form factor allows integration into most network environments).
This document provides hardware specifications, instructions on adding cluster capacity, and maintenance
instructions.
You can also review a list of Frequently Asked Questions, available in the NPP portal https://fb.me/npp, at
the bottom of the Support section.
For all shipments, ensure that you have received all of the hardware required. If any equipment is
missing, contact your installation vendor for assistance.
QTY  Item  Model¹
(1 per server)  For 10G uplink, SFP+ optical transceiver, LR or SR²  N/A
(2)  For 100G uplink, QSFP optical transceiver, LR4 or SR4²  N/A

¹ Model numbers vary per cluster.
² Uplink capacity is selected for every order, either with 10G or 100G ports. Optical transceivers will be sent according to the port type selected. Meta provides transceivers only for the FNA, while optics for the ISP devices must be provided by the ISP.
The following table shows equipment included in an FNA order used to add servers to an existing cluster. This upgrade kit does not include a switch; use the existing switch at your FNA deployment. The order does, however, include optical transceivers for the additional uplink capacity.
Servers are shipped in groups of two, and the quantity you receive depends on your current and forecasted Meta traffic volume. The equipment will arrive at your facility in one shipment. FNA's installation vendor will contact you with shipping information.
QTY  Item  Model¹
(1 per server)  For 10G uplink, SFP+ optical transceiver, LR or SR²  N/A
(2)  For 100G uplink, QSFP optical transceiver, LR4 or SR4²  N/A

¹ Model numbers vary per cluster.
² Uplink capacity is selected for every order, either with 10G or 100G ports. Optical transceivers will be sent according to the port type selected. Meta provides transceivers only for the FNA, while the optics for the ISP devices must be provided by the ISP.
The following table shows additional equipment required for the FNA installation that is not provided
by Meta.
QTY  Item
(1 per uplink port)  Fiber patch cables from Arista DCS-7060CX-32S-R-DC switch to ISP router²

² Fiber connectivity on the FNA side is the Lucent Connector (LC) type. Fiber connectivity on the router side can be LC-LC, LC-SC (Lucent Connector – Subscriber Connector), etc.
Check the optical light levels to ensure that the cluster has adequate signal strength. Light levels should be between -7 dBm and -2 dBm.
If light levels are outside this range (higher than -2 dBm or lower than -7 dBm), see Table 4: Signal Level Troubleshooting to resolve the issue.
Signal level  Possible cause  Resolution
Above -2  The signal is too strong. This could be caused by many reasons.  Fix with your normal operating procedures for strong signals.
-21 to -40  All cables are connected to their proper ports, but the ports are not enabled.  Check that the port connection is enabled.
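As a quick sanity check during installation, the acceptable range can be expressed as a small helper. This is an illustrative sketch only, not part of any Meta or Arista tooling; the function name and return strings are our own:

```python
def classify_light_level(dbm: float) -> str:
    """Classify a received optical power reading against the
    acceptable FNA range of -7 to -2 given in this guide."""
    if dbm > -2:
        return "too strong"  # troubleshoot per your normal strong-signal procedures
    if dbm < -7:
        return "too weak"    # check cabling and confirm the port is enabled
    return "ok"
```

For example, a reading of -4.5 falls inside the range and would classify as `"ok"`, while -1.0 would classify as `"too strong"`.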
Each server and switch has LEDs that can be used to diagnose issues with the FNA. The following table describes the behavior and equipment status indicated by these LEDs.
LED  Behavior  Status
Power LED  Off  The server is powered off or the power supply has failed.
Power LED  On, steady green  The power supply is on.
Link (LNK)  On, steady green  A connection exists between the server and the network.
Drive LED  Flashing amber/green  The drive is a member of one or more logical drives and a drive failure is predicted.
Drive LED  Flashing amber  The drive is not configured and a drive failure is predicted.
Parameter  Specification
Frequency  50-60 Hz
Operating Temperature Range  10°C to 35°C

Parameter  Specification
Nominal Input Current (A)  22.1 A @ 40 VDC; 18.2 A @ 48 VDC; 12 A @ 72 VDC
Maximum Rated Input Wattage (W)  874 W @ 40 VDC; 865 W @ 48 VDC; 854 W @ 72 VDC
Maximum British Thermal Unit Rating (BTU/hr)  2983 BTU/hr @ 40 VDC; 2951 BTU/hr @ 48 VDC; 2912 BTU/hr @ 72 VDC
Typical Input Current (A)  13.1 – 7.3 A; 6.3 – 2.3 A; 11 A @ -48 V
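The BTU ratings in the table follow directly from the wattage figures, since 1 W of input power dissipates approximately 3.412 BTU/hr of heat. A short sketch, purely illustrative, confirms the figures agree to within rounding:

```python
def watts_to_btu_per_hr(watts: float) -> float:
    """Convert electrical input power to heat output,
    using 1 W ≈ 3.412 BTU/hr."""
    return watts * 3.412

# The table's BTU figures agree with its wattage figures to within
# a few BTU/hr of rounding error:
for watts, btu in [(874, 2983), (865, 2951), (854, 2912)]:
    assert abs(watts_to_btu_per_hr(watts) - btu) < 5
```

This conversion is useful when sizing cooling capacity for a rack: sum the maximum rated wattage of all installed servers and multiply by 3.412 to estimate the total heat load in BTU/hr.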
Environmental Characteristics
7. Hardware Specifications
Refer to the specifications for the equipment included in your FNA kit (this can vary according to when the
order was placed).
Parameter Specification
Hewlett Packard ProLiant DL380p G10
Dimensions (Height x Width x Depth) 8.73 x 44.55 x 73.02 cm (3.44 x 17.54 x 28.75 in)
Weight (Approximate) 14.9 kg (32 lb)
Arista DCS-7060CX-32S-R Switch
Dimensions (Height x Width x Depth) 4.57 x 48.26 x 40.64 cm (1.8 x 19 x 16 in)
Weight (Approximate) 9.5 kg (21 lb)
Feature  Description
Temperature
• Non-operating: -30°C to 60°C (-22°F to 140°F); maximum rate of change is 20°C/hr (36°F/hr)
Relative humidity (Rh)
• Operating: 8% to 90% Rh; 28°C (82°F) maximum wet bulb temperature; non-condensing
• Non-operating: 5% to 95% Rh; 38.7°C (101.7°F) maximum wet bulb temperature; non-condensing
Altitude
• Operating: 3050 m (10,000 ft); this value may be limited by the type and number of options installed; maximum allowable altitude change rate is 457 m/min (1500 ft/min)
• Non-operating: 9144 m (30,000 ft); maximum allowable altitude change rate is 457 m/min (1500 ft/min)
Acoustic noise¹
• Idle: LWAd 4.7 B (entry), 4.9 B (base), 4.8 B (perf); LpAm 31 dBA (entry), 34 dBA (base), 33 dBA (perf)
• Operating: LWAd 4.7 B (entry), 4.9 B (base), 4.8 B (perf); LpAm 31 dBA (entry), 34 dBA (base), 33 dBA (perf)
Product conformance to cited product specifications is based on sample (type) testing, evaluation, or assessment. This product or family of products is eligible to bear the appropriate compliance logos and statements.
¹ The listed sound levels apply to standard shipping configurations. Additional options may result in increased sound levels.
Warning: This procedure requires updating the Link Aggregation Group (LAG) configuration. Updating the LAG may cause a service interruption, so it is recommended to drain network traffic first. See section 13, Draining Network Traffic.
8. Growth Path
This section provides a reference for best practices regarding augmentations. The growth path optimizes
for deployment failover scenarios by reducing the quantity of Single Points Of Failure (SPOF).
Note: When building an FNA cluster beyond eight servers, it is best practice to install the additional servers in a new rack and maintain equal-size clusters (as depicted in growth path configurations 3 and 4).
ii. Rack the additional servers, connect the peripheral cables (power and network), and power on the servers. Refer to the Quick Start Guide in the Documents section of the NPP.
iii. For clusters with more than four servers, follow the port assignments as illustrated. Other port maps are available in the Quick Start Guide.
ii. Connect the new port on the FNA switch to the ISP router as instructed in Figure 3.
iii. Update the ISP router configuration to add the new port to the LACP interface.
The FNA switch is pre-configured; therefore, this process usually does not require intervention from Meta. However, to upgrade from 10G to 100G, please create a Support ticket in the NPP asking for assistance.
Note: 10G to 100G upgrade is only applicable for FNA caches with an Arista switch. Older FNA caches with
Cisco switches only support 10G ports.
After the equipment has been in use for a few years, it might be necessary to replace the switch and/or servers. Before the refresh:
i. Do not power down or replace any of the hardware, as Meta must access the servers to prepare the cluster for refresh.
ii. Decommissioning the old servers may take up to six hours, so please arrange for a site technician accordingly.
If you are going through this process, please review the FNA Refresh Guide, available in the NPP portal
Documents section.
If you don’t have access to the NPP, you may instead provide the following information in an email:
Item Detail
To [email protected], [email protected]
Note: Update the items in brackets; these are unique to your deployment.
Before powering down the FNA cluster for maintenance, all network traffic must be properly drained. This
procedure describes how to properly drain traffic from the cluster.
Important: This procedure requires withdrawing prefixes. It is important that the Border Gateway Protocol
(BGP) peering session is maintained while withdrawing prefixes.
i. Withdraw prefixes. While maintaining the BGP peering session, begin withdrawing BGP prefixes.
ii. Traffic will begin to drain and will fully drain in less than one hour.
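The key invariant of the drain procedure is that the peering session stays established while the advertisements disappear. The sketch below models that invariant with a hypothetical `BgpSession` class; it is a toy model of our own, not a real BGP implementation or any Meta tooling:

```python
class BgpSession:
    """Toy model of a BGP peering session, used only to illustrate
    the drain invariant: withdrawing prefixes must not tear down
    the session itself."""

    def __init__(self, prefixes):
        self.established = True        # peering session state
        self.advertised = set(prefixes)  # prefixes currently advertised

    def withdraw(self, prefix):
        # Remove the advertisement; the session stays established.
        self.advertised.discard(prefix)


def drain(session: BgpSession) -> bool:
    """Withdraw every advertised prefix, then report whether the
    session survived with nothing left advertised."""
    for prefix in list(session.advertised):
        session.withdraw(prefix)
    return session.established and not session.advertised
```

In practice, the withdrawal is performed on the ISP router (for example via a route-map or prefix filter that stops the advertisements) while leaving the neighbor configuration in place, so the TCP session and BGP keepalives continue uninterrupted.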
You should shut down the FNA only if it is absolutely required and traffic has been properly drained from
the system (see 13. Draining Network Traffic). This should be a rare occurrence. To shut down the FNA,
follow the procedure outlined below or create a task in the NPP:
i. Connect a monitor and keyboard to the server you want to power off.
ii. Use the arrow keys and navigate to the Shut Down text.
iii. Press the [Enter] key. The FNA will power off.
Meta regularly monitors internet reachability from various simulated endpoints. When reachability issues
are detected on these Virtual IPs (VIPs), Meta may drain traffic from an FNA cluster. This measure ensures
that Meta maintains a high quality of service for the Meta platform.
Important note: Changing the FNA caching prefixes (the /26 IPv4 or /64 IPv6) takes between 24 and 48 hours to take effect in our systems. If you request this change, the cache will be drained and carry no traffic for 24 to 48 hours until traffic returns to normal levels. By contrast, requesting changes to P2P or BGP IP addresses, as well as updates to your ASN, usually takes two hours or less to take effect.
To do so:
i. First, make sure that the ASN has been added to your organization profile in the NPP. You can check this in the Settings section (cog wheel icon). More details about this, and about how to request the addition of a new ASN, are available in the Portal User Guide in the NPP portal Documents section.
ii. Once the new ASN is available, create a new Support ticket in the NPP requesting the AS number change.
As this may be indicative of prefix leakage, we will inform you via a Direct Support ticket in the Network
Partner Portal when you have reached 90% of these thresholds. Should you cross either of these
thresholds, traffic will be drained from your cluster. To prevent this, please ensure your BGP advertisements remain below the defined thresholds. If you have a legitimate reason to exceed them, please contact [email protected] or open a ticket in the NPP Support section.
You can find out more in the BGP Community Signaling Guide, available in the NPP portal Documents section.