
Data Center SRX Series

The data center SRX Series product line is designed to be scalable and fast for data center environments where high performance is required. Unlike the branch products, the data center SRX Series devices are highly modular—a case in point is that the base chassis of any of these products provides no traffic-processing capability on its own, because the devices are designed to scale in performance as cards are added. (This also reduces the investment required for an initial deployment.)

There are two lines of products in the data center SRX Series: the SRX3000 line and the SRX5000 line. Each uses almost identical components, which is great because any testing done on one platform can carry over to the other. It’s also easier to have feature parity between the two product lines since the data center SRX Series has specific ASICs and processors that cannot be shared unless they exist on both platforms. Where differences do exist, trust that they will be noted.

The SRX3000 line is the smaller of the two, designed for small to medium-size data centers and Internet edge applications. The SRX5000 line is the largest services gateway that Juniper offers. It is designed for medium to very large data centers and it can scale from a moderate to an extreme performance level.

Both platforms are open for flexible configuration, allowing the network architect to essentially create a device tailored to specific needs. Since processing and interfaces are both modular, it’s possible to build a customized device, such as one weighted toward IPS, with deep inspection but lower throughput. Here, the administrator would add fewer interface cards but more processing cards, allowing only a relatively small amount of traffic to enter the device but providing an extreme amount of inspection. Alternatively, the administrator can build a data center SRX with many physical interfaces but few processors for inspection. All of this is possible with the data center SRX Series.

Data Center SRX-Specific Features

The data center SRX Series products are built to meet the specific needs of today’s data centers. They share certain features that require the same underlying hardware to work, and those features exist precisely because data center deployments demand them.

The first such feature is transparent mode. Transparent mode is the ability for the firewall to act as a transparent bridge. As a transparent bridge, the firewall forwards packets based on their destination MAC address rather than routing them. Firewall policies are still enforced, as would be expected. The benefit of a transparent firewall is that it can easily be placed anywhere in the network.

Note

As of Junos 10.2, transparent mode is not available for the branch SRX Series products.

In the data center, IPS is extremely important in securing services, and the data center SRX Series devices have several IPS features that are currently not available on the branch SRX Series devices. Inline tap mode is one such data-center-specific feature, allowing the SRX to copy off sessions as they pass through the device. The SRX continues to process the traffic in Intrusion Detection and Prevention (IDP) and to pass the traffic out of the SRX, but now it will alert (or log) when an attack is detected, reducing the risk that a false positive drops legitimate traffic.

Because the data center SRX Series devices have a large amount of processing power at their disposal, they have the capability to decrypt SSL traffic for inspection. The organization’s private SSL key can be loaded onto the SRX, which can then decrypt SSL traffic in real time and inspect it for attacks. This provides an additional layer of security by catching attacks that would otherwise slip through inside encrypted streams.

Note

The branch SRX Series products do not have SSL decryption capability, mostly because of the horsepower needed to drive it.

Another feature common to the data center SRX Series is what is known as dedicated mode. The data center SRX Series firewalls have dense and powerful processors, allowing flexibility in how they can be configured. Much as additional processing cards can be added, the SRX processors themselves can be tuned. Dedicated mode focuses the SRX processing on IDP, so overall IDP throughput increases, as do the maximum session counts.

Note

Because the branch SRX Series products utilize different processors, it is not possible to tune them for dedicated mode.

Another data-center-specific feature of note is the AppDoS feature. When the SRX is deployed in a data center it is designed to protect servers, and one of the most common attacks of the modern Internet era is the distributed denial-of-service (DDoS) attack. These attacks are extremely difficult to detect and stop, but using IDP technology it’s possible to set thresholds and secure against them. The AppDoS feature uses a series of thresholds to detect attacks and then stops only the attacking clients, not the legitimate ones.

Note

Because the branch SRX Series isn’t focused on protecting services, the AppDoS feature was not made available for that platform (as of Junos 10.2).
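To make the idea concrete, the following is a minimal conceptual sketch of threshold-based client tracking in Python. It is not SRX or AppDoS code, and the one-second window, the per-client counter, and the 100-requests-per-second limit are all illustrative assumptions; the real feature works on IDP application-level context and a series of thresholds.

    import time
    from collections import defaultdict

    REQUESTS_PER_SECOND_LIMIT = 100      # illustrative per-client threshold

    class ClientRateTracker:
        """Track per-client request rates and block only the offending clients."""

        def __init__(self, limit=REQUESTS_PER_SECOND_LIMIT):
            self.limit = limit
            self.counts = defaultdict(int)   # client IP -> requests in the current window
            self.window_start = time.time()
            self.blocked = set()

        def record_request(self, client_ip):
            now = time.time()
            if now - self.window_start >= 1.0:   # roll the one-second window
                self.counts.clear()
                self.window_start = now
            self.counts[client_ip] += 1
            if self.counts[client_ip] > self.limit:
                self.blocked.add(client_ip)      # stop the attacker, not the valid clients

        def is_allowed(self, client_ip):
            return client_ip not in self.blocked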

We cover many of these features, and others, throughout this book in various chapters and sections. Use the index at the end of the book as a useful cross-reference to these and other data center SRX Series features.

SPC

The element that provides all of the processing on the SRX Series is called the Services Processing Card (SPC). An SPC contains one or more Services Processing Units (SPUs). The SPU is the processor that handles all of the services on the data center SRX Series firewalls, from firewalling, NAT, and VPN to session setup and anything else the firewall does.

Each SPU provides extreme multiprocessing and can run 32 parallel tasks simultaneously. A task is run as a separate hardware thread (see Parallel Processing for an explanation of hardware threads). This equates to an extreme amount of parallelism. An SPU can operate in four modes: full central point, small central point, half central point, and full flow. SPUs that operate in both central point and flow mode are said to be in combo mode. Based on the mode, the hardware threads are divided differently.

The SPU can operate in up to four different distributions of threads, which breaks down to two different functions that it can provide: the central point and the flow processor. The central point (CP) is designed as the master session controller. The CP maintains a table for all of the sessions that are active on the SRX—if a packet is ever received on the SRX that is not matched as part of an existing session, it is sent to the CP. The CP can then check against its session table and see if there is an existing session that matches it. (We will discuss the new session setup process in more detail shortly, once all of the required components are explained.)

The CP has three different settings so that users can scale the SRX appropriately. The CP is used as part of the new session setup process, which determines new connections per second (CPS). That process is distributed across multiple components in the system, so it would not make sense to dedicate a processor to providing maximum CPS if there were not enough of the other components to back it up. To provide balanced performance, the CP is automatically tuned to provide the CPS capabilities the rest of the platform can use, and any remaining hardware threads go back into processing network traffic. At any one time, only one processor acts as the CP, hence the term central point.

The remaining SPUs in the SRX are dedicated to processing traffic for services. Sessions are distributed to these processors as part of the new session setup process. Because each SPU, like any computing device, has a finite amount of processing power, an SPU shares whatever computing power it has among the enabled services. If additional processing power is required, more SPUs can be added. Adding SPUs provides near-linear scaling, so if a feature is turned on that cuts performance in half, simply adding another SPU brings performance back to where it was.

The SPU’s linear scaling makes it easier to plan a network. If needed, a minimal number of SPUs can be purchased upfront, and then, over time, additional SPUs can be added to grow with the needs of the data center. To give you an indication of the processing capabilities per SPU, Table 1-11 shows off the horsepower available.

Table 1-11. SPU processing capabilities

Item                     Capability
Packets per second       1,100,000
New CPS                  50,000
Firewall throughput      10 Gbps
IPS throughput           2.5 Gbps
VPN throughput           2.5 Gbps

Each SPC in the SRX5000 line has two SPUs, and each SPC in the SRX3000 line has a single SPU. As more processing cards are added, the SRX gains the additional capabilities listed in Table 1-11. So, when additional services such as logging and NAT are turned on and capacity per processor decreases slightly, more processors can be added to offset the performance lost to the new services.
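As a rough planning aid, the per-SPU figures from Table 1-11 can be multiplied out under an idealized linear-scaling assumption. The short Python sketch below is illustrative only; real capacity depends on which services are enabled and on platform-level limits such as the CP.

    # Per-SPU figures from Table 1-11.
    PER_SPU = {"firewall_gbps": 10, "ips_gbps": 2.5, "vpn_gbps": 2.5, "new_cps": 50_000}

    def estimated_capacity(spc_count, spus_per_spc):
        """Idealized linear scaling: aggregate capacity grows with the SPU count."""
        spus = spc_count * spus_per_spc
        return {metric: value * spus for metric, value in PER_SPU.items()}

    # Two SRX5000-line SPCs carry four SPUs (two per card).
    print(estimated_capacity(spc_count=2, spus_per_spc=2))
    # {'firewall_gbps': 40, 'ips_gbps': 10.0, 'vpn_gbps': 10.0, 'new_cps': 200000}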

NPU

The NPU, or Network Processing Unit, is similar in concept to the SPU. The NPU resides either on an input/output card (IOC) or on its own Network Processing Card (NPC), depending on the SRX platform type: in the SRX5000 line the NPU sits on the IOC, and in the SRX3000 line it is on a separate card.

When traffic enters an interface card it has to pass through an NPU before it can be sent on for processing. In the SRX5000 line, the physical interfaces and NPUs sit on the same interface card, so each interface or interface module has its own NPU. In the SRX3000 line, each interface card is bound to one of the NPUs in the chassis: when an SRX3000 line appliance boots, interfaces are bound to NPUs in a round-robin fashion until each interface has an NPU. It is also possible to manually bind interfaces to NPUs through configuration.

The biggest difference between the SRX3000 and SRX5000 lines’ use of NPUs comes down to cost: separating the physical interfaces from the NPU in the SRX3000 line reduces the overall cost of the cards, providing a lower-cost platform to the customer.

The NPU is used as a part of the session setup process to balance packets as they enter the system. The NPU takes each packet and balances it to the correct SPU that is handling that session. In the event that there is not a matching session on the NPU, it forwards the packet to the CP to figure out what to do with it.

Each NPU can process about 6.5 million packets per second inbound and about 16 million packets per second outbound; this applies across the entire data center SRX Series. The method the NPU uses to match a packet to a session is based on matching the packet against its wing table; a wing is half of a session, representing one direction of the bidirectional flow. Figure 1-29 depicts a wing in relation to a flow.

Figure 1-29. Sessions and wings

The card to which the NPU is assigned determines how much memory it has to store wings (some cards have more memory, since there are fewer components on them). Table 1-12 lists the number of wings per NPU. Each wing has a five-minute keepalive: if five minutes pass without a packet matching the wing, the wing is deleted.

Table 1-12. Number of wings per NPU

Card type           NPUs per card    Wings per NPU
4x10G SRX5000       4                2 million
40x1G SRX5000       4                2 million
Flex I/O SRX5000    2                4 million
NPC SRX3000         1                4 million
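Conceptually, a wing is a one-direction lookup entry that maps a flow to the SPU that owns its session, and it ages out after five idle minutes. The following Python sketch illustrates that behavior; the key layout and timeout handling are illustrative assumptions, not the actual NPU data structure.

    import time

    WING_IDLE_TIMEOUT = 300   # seconds; a wing ages out after five idle minutes

    class WingTable:
        """One-direction flow entries: (src, dst, sport, dport, proto) -> owning SPU."""

        def __init__(self):
            self.wings = {}   # flow key -> (spu_id, last-seen timestamp)

        def install(self, flow_key, spu_id):
            self.wings[flow_key] = (spu_id, time.time())

        def lookup(self, flow_key):
            entry = self.wings.get(flow_key)
            if entry is None:
                return None                       # no wing: the packet goes to the CP
            spu_id, last_seen = entry
            if time.time() - last_seen > WING_IDLE_TIMEOUT:
                del self.wings[flow_key]          # idle wing is deleted
                return None
            self.wings[flow_key] = (spu_id, time.time())
            return spu_id                         # forward to the owning SPU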

It is possible for the wing table on a single NPU to fill up, and this is a real possibility in the SRX5000 line since the total number of sessions the platform supports exceeds the total number of possible wings on a single NPU. To get around this, Juniper introduced a feature called NPU bundling in Junos 9.6, allowing two or more NPUs to be bundled together. The first NPU is used as a load balancer, spreading packets across the other NPUs in the bundle, and those remaining NPUs process the packets. This increases not only the total number of wings but also the maximum number of ingress packets per second. NPUs can be bundled on or across cards, with up to 16 NPUs in a single bundle and up to eight different bundles per system.
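A minimal sketch of the bundling idea follows, in Python. The hash-based spreading shown here is an assumption made for illustration; the section does not document how the first NPU actually balances packets across the bundle members.

    class NpuBundle:
        """The first NPU only spreads packets; the member NPUs hold the wings."""

        def __init__(self, member_npus):
            self.members = list(member_npus)      # e.g., per-NPU wing tables

        def select_member(self, flow_key):
            # Deterministic spread so a given flow always lands on the same member NPU.
            return self.members[hash(flow_key) % len(self.members)]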

The NPU also provides other functions, such as a majority of the screening functions. A screen is an intrusion detection function. These functions typically relate to single packet matching or counting specific packet types. Examples of this are matching land attacks or counting the rate of TCP SYN packets. The NPU also provides some QoS functions.

Data Center SRX Series Session Setup

We discussed pieces of the session setup process in the preceding two sections, so here let’s put the entire puzzle together. It’s an important topic to discuss, since it is key to how the SRX balances traffic across its chassis. Figure 1-30 shows the setup we will use for our explanation.

Figure 1-30. Hardware setup

Figure 1-30 depicts two NPUs: one NPU will be used for ingress traffic and the other will be used for egress traffic. The figure also shows the CP. For this example, the processor handling the CP function will be dedicated to that purpose. The last component shown is the flow SPU, which will be used to process the traffic flow.

Figure 1-31 shows the initial packet coming into the SRX. For this explanation, a TCP session will be created. The packet is first sent to the ingress NPU, which checks it against its existing wings. Since there are no existing wings, the NPU forwards the packet to the CP, which checks its master session table to see whether the packet matches an existing flow. Since this is the first packet into the SRX and no sessions exist, the CP recognizes it as a potential new session.

Figure 1-31. The first packet

The packet is then sent to one of the flow SPUs in the system using the weighted round-robin algorithm.

Note

Each SPU is weighted. A full flow SPU is given a weight of 100; a combo-mode SPU with a small CP (and thus a majority of its threads doing flow processing) is given a weight of 60; and a half-CP, half-flow SPU is given a weight of 50. This way, when the CP distributes new sessions, they are spread across the processors in proportion to their available flow capacity.

In Figure 1-31 there is only a single SPU, so the packet is sent there.
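The weighted selection described in the note can be sketched as follows, in Python. The weights come from the note above; the smooth weighted round-robin algorithm used here is an illustrative stand-in, since the CP’s actual selection logic is not exposed.

    def weighted_round_robin(spus):
        """Smooth weighted round robin: repeatedly pick the SPU holding the most credit."""
        weights = dict(spus)
        credits = {name: 0 for name in weights}
        total = sum(weights.values())
        while True:
            for name, weight in weights.items():
                credits[name] += weight
            chosen = max(credits, key=credits.get)
            credits[chosen] -= total
            yield chosen

    # Hypothetical mix: one full flow SPU, one combo SPU, one half-CP/half-flow SPU.
    picker = weighted_round_robin([("spu0", 100), ("spu1", 60), ("spu2", 50)])
    owner_of_next_session = next(picker)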

The SPU does a basic sanity check on the packet and then sets up an embryonic session. This session lasts for up to 20 seconds, and the CP is notified of it. The remaining SYN-ACK and ACK packets must be received before the session is fully established. Until then, the NPUs forward the SYN-ACK and ACK packets to the CP, and the CP forwards them to the correct SPU, which it can do because it knows which SPU holds the embryonic session.

In Figure 1-32, the session has been established: the three steps of the three-way handshake have completed. Once the SPU has seen the final ACK packet, it completes session establishment within the device, first sending a message to the CP to turn the embryonic session into a complete session and then starting the session timer at the full timeout for the protocol. Next, the SPU notifies the ingress NPU, which installs a wing. This wing identifies the session and specifies which SPU is responsible for it. When the ACK packet that completed the session establishment is sent out of the SRX, a message is tacked onto it. The egress NPU interprets this message and installs a wing in its local cache, similar to the ingress wing except that some elements are reversed; this wing matches traffic from the destination back to the source (see Figure 1-29 for a representation of a wing).

Figure 1-32. Session established

Now that the session is established, the data portion of the session begins, as shown in Figure 1-33, where a data packet is received by the ingress NPU. The NPU checks its local wing table, sees a match, and forwards the packet to the owning SPU. The SPU validates the packet, matching it against the session table to ensure that it is the next expected packet in the data flow, and then forwards it out the egress NPU. (The egress NPU does not check the packet against its wing table; a packet is only checked on ingress.) When the egress NPU receives a return packet, sent from the destination back to the source, that packet is matched against its local wing table and then processed through the system just as the first data packet was.

Figure 1-33. Existing session

Lastly, when the session has served its purpose, the client starts to end it. In this case, a four-way FIN close is used. The sender starts the process, and the four closing packets are treated the same as any other packets for the existing session. What happens next is important, as shown in Figure 1-34. Once the SPU has processed the close, it shuts down the session on the SRX, sending a message to the ingress and egress NPUs to delete their wings and a close message to the CP. The CP and SPU then wait about eight seconds before completing the session close, to ensure that everything was closed properly.

Figure 1-34. Session teardown
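Pulling the timers together, the session life cycle walked through in Figures 1-31 through 1-34 can be summarized as a simple state model. This Python sketch is conceptual only; the state names are invented for illustration, while the 20-second embryonic timeout and the roughly 8-second close linger come from the text.

    EMBRYONIC_TIMEOUT = 20   # seconds allowed to complete the three-way handshake
    CLOSE_LINGER = 8         # approximate wait after close before final cleanup

    class SessionLifecycle:
        """Walk a session through the stages shown in Figures 1-31 to 1-34."""

        def __init__(self):
            self.state = "none"

        def first_syn(self):
            # SPU sanity-checks the packet, creates an embryonic session, notifies the CP.
            self.state = "embryonic"

        def handshake_complete(self):
            # Final ACK seen: the CP entry becomes a full session and the
            # ingress and egress NPUs install their wings.
            assert self.state == "embryonic"
            self.state = "established"

        def close_seen(self):
            # Four-way FIN close processed: wings are deleted and the CP is told to close.
            assert self.state == "established"
            self.state = "closing"

        def cleanup(self):
            # After roughly CLOSE_LINGER seconds the CP and SPU finish the close.
            self.state = "closed"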

Although this seems like a complex process, it is what allows the SRX to scale. As more SPUs and NPUs are added to the system, this defined process lets the SRX balance traffic across the available resources. Over time, session distribution is nearly even across all of the processors, a fact proven across many SRX customer deployments. Some have had concerns that a single processor would be overwhelmed by all of the sessions, but that has not happened using this balancing mechanism. In the future, if needed, Juniper could implement a least-connections or least-utilization model for balancing traffic, but it has not had to as of Junos 10.2.

Data Center SRX Series Hardware Overview

So far we’ve talked about the components of the data center SRX Series, so let’s start putting the components into the chassis. The data center SRX Series consists of two different lines and four different products. Although they all utilize the same fundamental components, they are designed to scale performance for where they are going to be deployed. And that isn’t easy. The challenge is that a single processor can only be so fast and it can only have so many simultaneous threads of execution. To truly scale to increased performance within a single device, a series of processors and balancing mechanisms must be utilized.

Since the initial design goal of the SRX was to do all of this scaling in a single product, and allow customers to choose how they wanted (and how much) to scale the device, it should be clear that the SPUs and the NPUs are the points to scale (especially if you just finished reading the preceding section).

The NPUs allow traffic to come into the SRX, and the SPUs allow for traffic processing. Adding NPUs allows for more packets to get into the device, and adding SPUs allows for linear scaling. Of course, each platform needs to get packets into the device, which is done by using interface cards, and each section on the data center SRX Series will discuss the interface modules available per platform.

SRX3000

The SRX3000 line is the smaller of the two data center SRX Series lines. It is designed for the Internet edge, or small to medium-size data center environments. The SRX3000 products are extremely modular. The base chassis comes with a route engine (RE), a switch fabric board (SFB), and the minimum required power supplies. The RE is a computer that runs the management functions for the chassis, controlling and activating the other components in the device. All configuration management is also done from the RE.

The reason it is called a route engine is that it runs the routing protocols; on other Junos platforms such as the M Series, T Series, and MX Series, the RE is, of course, a major part of the device. Although SRX devices do have excellent routing support, most customers do not use this capability extensively.

The SFB contains several important components for the system: the data plane fabric, the control plane Ethernet network, and built-in Ethernet data ports. The SFB has eight 10/100/1000 ports and four SFPs. It also has a USB port that connects into the RE and a serial console port. All products in the SRX3000 line contain the SFB. In fact, the SRX3000 line is the only data center line that contains built-in ports (the SRX5000 line is truly modular, containing no built-in I/O ports). The SFB also contains an out-of-band network management port, which is not connected to the data plane and is the preferred way to manage the SRX3000 line.

The SRX3400 is the base product in the SRX3000 line. It has seven flexible PIC concentrator (FPC) slots (a PIC is a physical interface card); four slots are in the front of the chassis and three are in the rear. The slots let network architects mix and match cards, deciding how the firewall is to be configured. The three types of cards the SRX3400 can use are interface cards, NPCs, and SPCs, and Table 1-13 lists the minimum and maximum number of cards per chassis by type.

Table 1-13. SRX3400 FPC numbers

Type        Minimum    Maximum    Install location
I/O card    0          4          Front slots
SPC         1          4          Any
NPC         1          2          Rear three

The SRX3400 is three rack units high and a full 25.5 inches deep, which is the full depth of a standard four-post rack. Figure 1-35 shows the front and back of the SRX3400: the SFB is the wide card at the top front of the chassis on the left, the FPC slots are in both the front and rear of the chassis, and the two slots in the rear of the chassis are for the REs. As of Junos 10.2, only one RE is supported, in the left slot.

Figure 1-35. The front and back of the SRX3400

Performance on the SRX3400 is impressive, and Table 1-14 lists the maximums. These numbers are achieved with a modular configuration of four SPCs, two NPCs, and one IOC. With that configuration the SRX3400 can provide up to 175,000 new connections per second, a huge number that may dwarf the performance of the branch series. The average customer may not need such rates on a continuous basis, but it’s great to have the horsepower in the event that traffic begins to flood through the device.

The SRX3400 can pass a maximum of 20 Gbps of firewall throughput. This limit comes from two factors: the maximum number of NPCs and the available interfaces, which together cap overall throughput. As discussed earlier, each NPU can take a maximum of 6.5 million packets per second inbound, and in the maximum throughput configuration one interface card plus the onboard interfaces are used. With a total of roughly 20 Gbps of ingress capacity, it isn’t possible to get more traffic into the box.
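A back-of-the-envelope check of that ingress limit, in Python. The 6.5 million packets per second per NPU and the roughly 20 Gbps of interface capacity come from the text; the 1,500-byte average packet size is an assumption for the arithmetic.

    NPU_MAX_PPS = 6_500_000      # ingress packets per second per NPU
    AVG_PACKET_BYTES = 1_500     # assumed average packet size

    def ingress_cap_gbps(npu_count, interface_gbps):
        """Ingress is limited by whichever runs out first: NPU pps or interface bandwidth."""
        pps_limited_gbps = npu_count * NPU_MAX_PPS * AVG_PACKET_BYTES * 8 / 1e9
        return min(pps_limited_gbps, interface_gbps)

    # Two NPCs plus roughly 20 Gbps of physical interfaces: the interfaces are the limit.
    print(ingress_cap_gbps(npu_count=2, interface_gbps=20))   # -> 20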

Table 1-14. SRX3400 capacities

Type                           Capacity
CPS                            175,000
Maximum firewall throughput    20 Gbps
Maximum IPS throughput         6 Gbps
Maximum VPN throughput         6 Gbps
Maximum concurrent sessions    2.25 million
Maximum firewall policies      40,000
Maximum concurrent users       Unlimited

As shown in Table 1-14, the SRX3400 can also provide other services, such as IPS and VPN, each at up to 6 Gbps. These numbers are mutually exclusive, since each SPU has a limited amount of computing power to share among services. The SRX3400 can also hold a maximum of 2.25 million sessions as of Junos 10.2. In today’s growing environments, where a single host can demand dozens of sessions at a time, 2.25 million sessions may not be a high enough number, especially at larger scale.

If more performance is required, it’s common to move up to the SRX3600. This platform is nearly identical to the SRX3400, except that it adds more capacity by increasing the total number of FPC slots in the chassis. The SRX3600 has a total of 14 FPC slots, doubling the capacity of the SRX3400. This does make the chassis’ height increase to five rack units (the depth remains the same). Table 1-15 lists the minimum and maximum number of cards by type per chassis.

Table 1-15. SRX3600 FPC numbers

Type        Minimum    Maximum    Install location
I/O card    0          6          Front slots
SPC         1          7          Any
NPC         1          3          Last three rear slots

As mentioned, the SRX3600 chassis is nearly identical to the SRX3400 except for the additional FPC slots. But two other items differ between the two chassis, as you can see in Figure 1-36. The SRX3600 has an additional card slot above the SFB; although it currently does not provide any additional functionality, a double-height SFB could be placed there in the future. And in the rear of the chassis, the number of power supplies has doubled to four to support the chassis’ additional power needs. A minimum of two power supplies is required to power the chassis, but for full redundancy all four should be used.

Figure 1-36. The SRX3600

Table 1-16 lists the maximum performance of the SRX3600, tested with a configuration of two 10G I/O cards, three NPCs, and seven SPCs. This configuration provides additional throughput: the firewall maximum rises to 30 Gbps, primarily because of the additional interface module and NPC, and the VPN and IPS numbers rise to 10 Gbps, while the CPS and session maximums remain the same. The CPS limit stays flat because the SRX3000 line uses a combo-mode CP processor, with half of the processor dedicated to processing traffic and the other half to setting up sessions; the SRX5000 line, by contrast, can provide a full CP processor.

Table 1-16. SRX3600 capacities

Type                           Capacity
CPS                            175,000
Maximum firewall throughput    30 Gbps
Maximum IPS throughput         10 Gbps
Maximum VPN throughput         10 Gbps
Maximum concurrent sessions    2.25 million
Maximum firewall policies      40,000
Maximum concurrent users       Unlimited

IOC modules

In addition to the built-in SFP interface ports, you can use three additional types of interface modules with the SRX3000 line, and Table 1-17 lists them by type. Each interface module is oversubscribed, with the goal of providing port density rather than line rate cards. The capacity and oversubscription ratings are also listed.

Table 1-17. SRX3000 I/O module summary

Type                  Description
10/100/1000 copper    16-port 10/100/1000 copper with 1.6:1 oversubscription
1G SFP                16-port SFP with 1.6:1 oversubscription
10G XFP               2 × 10G XFP with 2:1 oversubscription

Table 1-17 lists two types of 1G interface card, both with 16 1G ports. The media type is the only difference between the modules: one has 16 10/100/1000 copper interfaces and the other has 16 SFP ports. The benefit of the SFP version is that a mix of fiber and copper transceivers can be used, as opposed to the fixed copper-only card. Both cards are oversubscribed at a ratio of 1.6:1.

The remaining card listed in Table 1-17 is a 2 × 10G XFP card. It provides two 10G interfaces and is oversubscribed at a ratio of 2:1. Although the card is oversubscribed by a factor of two, its port density is its greatest value, because providing more ports allows for additional connectivity into the network. Most customers will not require all of the ports on the device to operate at line rate, and if more line-rate ports are required, the SRX5000 line can provide them.

Each module has a 10G full duplex connection into the fabric. This means 10 gigabits of traffic per second can enter and exit the module simultaneously, providing a total of 20 gigabits of traffic per second that could traverse the card at the same time.
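The oversubscription ratios in Table 1-17 follow directly from that 10 Gbps fabric connection, as this quick Python check shows.

    FABRIC_GBPS = 10   # each SRX3000 I/O module has a 10 Gbps connection into the fabric

    def oversubscription(port_count, port_speed_gbps):
        """Ratio of total front-panel capacity to the module's fabric connection."""
        return port_count * port_speed_gbps / FABRIC_GBPS

    print(oversubscription(16, 1))    # 16 x 1G copper or SFP card -> 1.6 (1.6:1)
    print(oversubscription(2, 10))    # 2 x 10G XFP card           -> 2.0 (2:1)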

SRX5000

The SRX5000 line of firewalls is the big iron of the SRX Series, in both size and capacity. The SRX5000 line provides maximum modularity in the number of interface cards and SPCs the device can utilize, enabling a “build your own services gateway” approach while allowing for expansion over time.

The SRX5000 line currently comes in two different models: the SRX5600 and the SRX5800. Fundamentally, both platforms are the same. They share the same major components, except for the chassis and how many slots are available, dictating the performance of these two platforms.

The first device to review is the SRX5600. This chassis is the smaller of the two, containing a total of eight slots. The bottom two slots are for the switch control boards (SCBs), an important component in the SRX5000 line, as each SCB contains three key items: a slot for the RE, the switch fabric for the device, and one of the control plane networks.

The RE in the SRX5000 line is the same concept as in the SRX3000 line, providing all of the chassis and configuration management functions. It also runs the routing protocol processes (if the user chooses to configure them). The RE is required to run the chassis, and it has a serial port, an auxiliary console port, a USB port, and an out-of-band management Ethernet port. The USB port can be used for loading new firmware on the device, while the out-of-band Ethernet port is the suggested port for managing the SRX.

The switch fabric is used to connect the interface cards and the SPCs together; all traffic that passes through the switch fabric is considered part of the data plane. The control plane network provides connectivity between all of the components in the chassis. This gigabit Ethernet network is used by the RE to talk to all of the line cards, and it allows management traffic to come back to the RE from the data plane. If the RE needs to send traffic, that traffic goes from the control plane network and is inserted into the data plane.

Only one SCB is required to run the SRX5600; a second SCB can be used for redundancy. (Note that if just one SCB is utilized, unfortunately the remaining slot cannot be used for an interface card or an SPC.) The SRX5600 can utilize up to two REs, one to manage the SRX and the other to create dual control links in HA.

On the front of the SRX5600, as shown in Figure 1-37, is what is called the craft interface: the series of labeled buttons at the top front of the chassis that allow you to enable and disable the individual cards. The SRX5600, unlike the SRX5800, can use lower-voltage (110/120v) power, which may be beneficial in environments where 220v power is not available or where rewiring certain locations is impractical. The SRX5600 is eight rack units tall and 23.8 inches deep.

Figure 1-37. The SRX5600

The SRX5000 line is quite flexible in its configuration, with each chassis requiring a minimum of one interface module and one SPC. Traffic must be able to enter the device and be processed; hence these two cards are required. The remaining slots in the chassis are the network administrator’s choice. This offers several important options.

The SRX5000 line has a relatively low barrier of entry because just a chassis and a few interface cards are required. In fact, choosing between the SRX5600 and the SRX5800 comes down to space, power, and long-term expansion.

For space considerations, the SRX5600 is physically half the size of the SRX5800, a significant fact considering that these devices are often deployed in pairs, and that two SRX5800s take up two-thirds of a physical rack. In terms of power, the SRX5600 can run on 110v, while the SRX5800 needs 220v.

The last significant difference between the SRX5600 and the SRX5800 is long-term expansion capability. Table 1-18 lists the FPC slot capacities of the SRX5600. As stated, the minimum is two cards, one interface card and one SPC, leaving four slots that can be mixed and matched among card types. Because of the high-end fabric in the SRX5600, card placement has no effect on performance: the cards can be placed in any slots and throughput is the same. This is worth noting because in some vendors’ products, maximum throughput drops when traffic crosses the backplane.

Table 1-18. SRX5600 FPC numbers

Type              Minimum    Maximum    Install location
FPC slots used    1 (SCB)    8          All slots are FPCs
I/O card          1          5          Any
SPC               1          5          Any
SCB               1          2          Bottom slots

In the SRX5800, the requirements are similar. One interface card and one SPC are required for the minimum configuration, and the 10 remaining slots can be used for any additional combination of cards. Even if the initial deployment only requires the minimum number of cards, it still makes sense to look at the SRX5800 chassis. It’s always a great idea to get investment protection out of the purchase. Table 1-19 lists the FPC capacity numbers for the SRX5800.

Table 1-19. SRX5800 FPC numbers

Type              Minimum     Maximum    Install location
FPC slots used    2 (SCBs)    14         All slots are FPCs
I/O card          1           11         Any
SPC               1           11         Any
SCB               2           3          Center slots

The SRX5800 has a total of 14 slots, and in this chassis the two center slots must contain SCBs, which doubles the fabric capacity of the chassis: since it has twice the number of slots, it needs twice the fabric. Even though two fabric cards are used, there is no performance penalty for going between any of the ports or cards on the fabric (this is important to remember, as some chassis-based products do have this limitation). Optionally, a third SCB can be used, providing redundancy in case one of the other two SCBs fails.

Figure 1-38 illustrates the SRX5800. The chassis is similar to the SRX5600, except the cards are positioned perpendicular to the ground, which allows for front-to-back cooling and a higher density of cards within a 19-inch rack. At the top of the chassis, the same craft interface can be seen. The two fan trays for the chassis are front-accessible above and below the FPCs.

Figure 1-38. The SRX5800

In the rear of the chassis there are four power supply slots. In an AC electrical deployment, three power supplies are required, with the fourth for redundancy. In a DC power deployment, the redundancy is 2+2, or two active supplies and two supplies for redundancy. Check with the latest hardware manuals for the most up-to-date information.

The performance metrics for the SRX5000 line are very impressive, as listed in Table 1-20. The CPS rate maxes out at 350,000, the maximum number of new connections per second the central point processor can handle. Since each new TCP connection takes three packets to set up, 350,000 CPS works out to about 1.05 million packets per second, which is roughly the maximum packets-per-second rate of a single SPU. Although this many connections per second is not required for most environments, at a mobile services provider, a large data center, or a full cloud network—or any environment where there are tens of thousands of servers and hundreds of thousands of inbound clients—this rate of connections per second may be just right.
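The arithmetic behind that ceiling is simple; the only assumption is the three packets (SYN, SYN-ACK, ACK) needed to set up each new TCP connection.

    MAX_CPS = 350_000            # new connections per second at the central point
    PACKETS_PER_NEW_TCP = 3      # SYN, SYN-ACK, ACK

    setup_pps = MAX_CPS * PACKETS_PER_NEW_TCP
    print(setup_pps)             # 1,050,000 -- close to one SPU's ~1.1 million pps (Table 1-11)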

Table 1-20. SRX5000 line capacities

Type                           SRX5600 capacity    SRX5800 capacity
CPS                            350,000             350,000
Maximum firewall throughput    60 Gbps             120 Gbps
Maximum IPS throughput         15 Gbps             30 Gbps
Maximum VPN throughput         15 Gbps             30 Gbps
Maximum concurrent sessions    9 million           10 million
Maximum firewall policies      80,000              80,000
Maximum concurrent users       Unlimited           Unlimited

For the various throughput numbers shown in Table 1-20, each metric doubles from the SRX5600 to the SRX5800, so the maximum firewall throughput is 60 Gbps on the SRX5600 and 120 Gbps on the SRX5800. This number is achieved using large HTTP GETs to create large stateful packet transfers; the number could be larger if UDP streams were used, but that is less valuable to customers, so the stateful HTTP numbers are used. The IPS and VPN throughputs follow the same pattern: 15 Gbps and 30 Gbps for each of these service types on the SRX5600 and SRX5800, respectively.

The IPS throughput numbers are achieved using the older NSS 4.2.1 testing standard. Note that this is not the same test used for maximum firewall throughput: the NSS test yields about half the throughput of the large HTTP transfer test, so if a similar HTTP-style test were run with IPS, roughly double the stated IPS throughput would be achieved.

These performance numbers were achieved using two interface cards and four SPCs on the SRX5600. On the SRX5800, four interface cards and eight SPCs were used. As discussed throughout this section, it’s possible to mix and match modules on the SRX platforms, so if additional processing is required, more SPCs can be added. Table 1-21 lists several examples of this “more is merrier” theme.

Table 1-21. Example SRX5800 line configurations

Example network         IOCs           SPCs    Goal
Mobile provider         1              6       Max sessions and CPS
Financial network       2              10      Max PPS
Data center IPS         1              11      Maximum IPS inspection
Maximum connectivity    8 flex IOCs    4       64 10G interfaces for customer connectivity

A full matrix and example use cases for the modular data center SRX Series could fill an entire chapter in a how-to data center book. Table 1-21 highlights only a few, the first for a mobile provider. A mobile provider needs to have the highest number of sessions and the highest possible CPS, which could be achieved with six SPCs. In most environments, the total throughput for a mobile provider is low, so a single IOC should provide enough throughput.

In a financial network, the packets-per-second (PPS) rate is the most important metric. To provide these rates, two IOCs are used, each configured with NPU bundling to allow 10 Gbps of ingress for small 64-byte packets. The 10 SPCs then provide packet processing and security for these small packets.
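A quick calculation shows why bundling is needed at these packet sizes. The 6.5 million packets per second per NPU comes from earlier in the chapter; the 20 bytes of preamble and inter-frame gap per Ethernet frame is a standard wire-overhead assumption.

    LINK_GBPS = 10
    PACKET_BYTES = 64
    WIRE_OVERHEAD_BYTES = 20     # Ethernet preamble plus inter-frame gap per frame
    NPU_MAX_PPS = 6_500_000      # ingress capacity of a single NPU

    line_rate_pps = LINK_GBPS * 1e9 / ((PACKET_BYTES + WIRE_OVERHEAD_BYTES) * 8)
    print(round(line_rate_pps))             # ~14.9 million packets per second
    print(line_rate_pps / NPU_MAX_PPS)      # ~2.3 NPUs' worth of ingress capacity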

In a data center environment, an SRX may be deployed for IPS capabilities only, so here the SRX would need only one IOC to have traffic come into the SRX. The remaining 11 slots would be used to provide IPS processing, allowing for a total of 45 Gbps IPS inspection in a single SRX. That is an incredible amount of inspection in a single chassis.

The last example in Table 1-21 is for maximum connectivity. This example offers 64 10G Ethernet ports. These ports are oversubscribed at a ratio of 4:1, but again the idea here is connectivity. The remaining four slots are dedicated to SPCs. Although the number of SPCs is low, this configuration still provides up to 70 Gbps of firewall throughput. Each 10G port could use 1.1 Gbps of throughput simultaneously.

IOC modules

The SRX5000 line has three types of IOCs, two of which provide line-rate throughput while the third is oversubscribed. Figure 1-39 illustrates the interface complex of the SRX5000 line. On the left is the PHY, or physical chip, which handles the physical media; next is the NPU, or network processor; and the last component is the fabric chip. Together, these components make up an interface complex. Each complex can provide 10 gigabits per second in each of the ingress and egress directions, or 20 gigabits per second of full-duplex throughput.

Figure 1-39. Interface complex of the SRX5000 line

Each type of card has a different number of interface complexes on it; Table 1-22 lists the number of complexes per I/O card type. Each complex is directly connected to the fabric, so traffic between complexes on the same card performs no better than traffic between complexes on different cards. This is a huge advantage of the SRX product line, because you can place any cards you add anywhere you want in the chassis.

Table 1-22. Complexes per line card type

Type        Complexes
4 × 10G     4
40 × 1G     4
Flex IOC    2

The most popular IOC for the SRX is the four-port 10 gigabit card. The 10 gigabit ports utilize the XFP optical transceivers. Each 10G port has its own complex providing 20 Gbps full duplex of throughput, which puts the maximum ingress on a 4 × 10G IOC at 40 Gbps and the maximum egress at 40 Gbps.

The second card listed in Table 1-22 is the 40 × 1G SFP IOC. This blade has four complexes, just as the four-port 10 gigabit card does, but instead of a single 10G port, each complex serves ten 1G ports. The blade offers the same 40 Gbps ingress and 40 Gbps egress as the four-port 10 gigabit card, and it also supports mixing copper and fiber SFPs.

The last card in Table 1-22 is the modular or Flex IOC. This card has two complexes on it, with each complex connected to a modular slot. The modular slot can utilize one of three different cards:

  • The first card is a 16-port 10/100/1000 card. It has 16 tri-speed copper Ethernet ports. Because it has 16 1G ports and the complex it is connected to can only pass 10 Gbps in either direction, this card is oversubscribed by a ratio of 1.6:1.

  • Similar to the first card is the 16-port SFP card. The difference here is that instead of copper ports, the ports utilize SFPs and the SFPs allow the use of either fiber or copper transceivers. This card is ideal for environments that need a mix of fiber and copper 1G ports.

  • The last card is the dense four-port 10G card. It has four 10-gigabit ports. Each port is still an XFP port. This card is oversubscribed by a ratio of 4:1 and is ideal for environments where connectivity is more important than line rate throughput.
