
M2M FEATURE NEWS

Accelerating Packet Capture in High-Speed Networks

By Special Guest
Dan Joe Barry, VP Positioning and Chief Evangelist, Napatech
July 02, 2015

In an Internet of Things (IoT) world, networks are being bombarded with data from hundreds, even thousands, of endpoints at speeds of up to 100 Gbps. Today’s network professionals not only need a toolbox layered with a variety of solutions to protect the enterprise network, they need those tools to operate in real time with nanosecond precision.

While there are a number of tools and capabilities available to engineers and administrators to help them manage and secure large and small-scale networks alike, few capabilities are as fundamental to this task as packet capture (PCAP). A mechanism for intercepting data packets that are traversing a computer network, PCAP is a common capability deployed within an organization to monitor security events and network performance, identify data leaks, troubleshoot issues and even perform forensic analysis to determine the impact of network breaches.

The bad news is that as network speeds continue to increase, existing PCAP systems using commodity network interface cards (NICs) are struggling to keep up with the demands of performing precision capture and replay at 10/40/100 Gbps speeds.

The good news is that there are solutions today that have been built to facilitate packet capture at speeds topping 100 Gbps. The use of network acceleration technology, coupled with open source network monitoring and capture solutions, can enable organizations to keep up with the demands of precision packet capture and replay on high-speed networks.

PCAP Analysis Systems
Effective PCAP and analysis systems can provide administrators and engineers with an accurate, real-time view of what is happening within a network infrastructure. Precision PCAP systems also give organizations the ability to re-create network events with high fidelity for verification and validation of architectural changes, troubleshooting and analysis.

When researching analysis and security solutions for high-speed networks, it is important to consider the coupling of open source tools with the speed and accuracy of programmable logic. Here are three key factors to weigh when comparing your options:

Line-Rate Capture and Replay: FPGA-based network acceleration cards (NACs) are ideal for performing high-speed packet capture and replay at a variety of speeds, including 1/10/40/100 Gbps. Moreover, NACs allow for precise inter-frame gap (IFG) control, which is critical when replaying captured traffic for troubleshooting or simulation of traffic flows.

Precision Time Stamping: Explore solutions that provide hardware-based, high-precision time stamping with nanosecond resolution for every frame captured and transmitted. Hardware-based time stamping avoids the unpredictable latency inherent in software-based solutions and enables a communication flow to be recorded precisely as it occurs. Precision Time Protocol (PTP) can also be supported for accurate synchronization across distributed network probes.

Intelligent Data Flow: To maintain capture and analysis performance at high speeds, it is important to implement technology that can identify and direct traffic flows immediately upon ingress. Doing so minimizes the load on user-space applications and gives administrators the ability to dynamically identify and direct data flows to specific CPU cores based on the type of traffic being analyzed, as sketched below.
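
To make the idea concrete, here is a minimal software sketch of flow-based distribution: packets belonging to the same flow are hashed on their IP/port 5-tuple and always steered to the same CPU core. The flow_key structure, the FNV-1a hash and the NUM_CORES value are illustrative assumptions for this example; a NAC performs the equivalent classification and steering in hardware at line rate.

/*
 * Illustrative sketch of flow-based load distribution in software: the
 * IP/port 5-tuple is hashed and every packet of a given flow is steered
 * to the same CPU core, preserving per-flow ordering. The flow_key
 * struct, the hash and NUM_CORES are assumptions for this example.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES 8

struct flow_key {
    uint32_t src_ip, dst_ip;     /* IPv4 addresses (host byte order here) */
    uint16_t src_port, dst_port;
    uint8_t  protocol;           /* e.g. 6 = TCP, 17 = UDP */
};

/* FNV-1a over an arbitrary byte range. */
static uint32_t fnv1a(const void *data, size_t len, uint32_t h)
{
    const uint8_t *p = data;
    while (len--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

/* Hash the 5-tuple field by field to avoid touching struct padding. */
static uint32_t flow_hash(const struct flow_key *k)
{
    uint32_t h = 2166136261u;
    h = fnv1a(&k->src_ip,   sizeof k->src_ip,   h);
    h = fnv1a(&k->dst_ip,   sizeof k->dst_ip,   h);
    h = fnv1a(&k->src_port, sizeof k->src_port, h);
    h = fnv1a(&k->dst_port, sizeof k->dst_port, h);
    h = fnv1a(&k->protocol, sizeof k->protocol, h);
    return h;
}

int main(void)
{
    struct flow_key k = { 0x0a000001, 0x0a000002, 51512, 443, 6 };
    printf("flow mapped to core %u of %d\n",
           flow_hash(&k) % NUM_CORES, NUM_CORES);
    return 0;
}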

Technical Approach - Conventional PCAP
Historically, organizations have relied on software tools to perform packet capture and analysis on their network infrastructure. In this case, software is installed on a designated monitoring host and configured to poll packets from a commodity network adapter placed in promiscuous mode and connected to the network via a Switched Port Analyzer (SPAN) interface.

In this scenario, each time the network adapter receives an Ethernet frame, it generates an interrupt request and copies the data from the memory buffer on the adapter into kernel space. Normally, the kernel-space driver would determine whether the packet is intended for this host and either drop the packet or pass it up the protocol stack until it reaches the user-space application it is destined for. When configured for promiscuous mode, however, all packets are captured in a kernel buffer regardless of destination host. Once the kernel buffer is full, a context switch is performed to transfer the data to a user-space buffer managed by libpcap, a system-independent interface for user-level packet capture, so that the data can be accessed by user-level applications.
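
The conventional approach can be sketched with the standard libpcap API. The snippet below opens an adapter in promiscuous mode, as it would be when attached to a SPAN port, and hands each frame to a user-space callback. The interface name and the snaplen/timeout values are placeholders, and the timestamps delivered this way are software timestamps with microsecond resolution by default.

/*
 * Minimal sketch of conventional software capture with libpcap.
 * "eth0", the snaplen and the read timeout are placeholders.
 */
#include <pcap/pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    /* hdr->ts is a software timestamp (microsecond resolution by
     * default); hdr->caplen is the number of bytes actually captured. */
    printf("%ld.%06ld: %u bytes\n",
           (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec, hdr->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* snaplen 65535, promiscuous mode on, 1000 ms read timeout */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* Block and deliver packets to the callback until interrupted. */
    pcap_loop(handle, -1, on_packet, NULL);

    pcap_close(handle);
    return 0;
}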

This intermediate buffer remains hidden from user-level applications and is necessary to prevent applications from accessing kernel-managed memory. Given this architecture, it is clear that some amount of time will elapse between when a frame is received by the adapter and when it is actually delivered to the user-space application for processing.

At low data rates this delay has little effect on PCAP accuracy, but at higher rates the latency is compounded and CPUs become saturated trying to keep pace with incoming data, leading to capture loss and timing inaccuracies.

Consider, for example, that a 1 Gbps network link carrying minimum-size Ethernet frames can push around 1.5 million packets per second, or one packet every 670 nanoseconds. At 10 and 100 Gbps, systems must process one packet every 67 or 6.7 nanoseconds, respectively.
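
The arithmetic behind those figures is straightforward: a minimum-size 64-byte Ethernet frame occupies 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are included. The short program below, offered as a back-of-the-envelope check, reproduces the rates quoted above.

/*
 * Back-of-the-envelope check of the packet rates quoted above, assuming
 * minimum-size 64-byte frames plus preamble and inter-frame gap
 * (84 bytes on the wire per packet).
 */
#include <stdio.h>

int main(void)
{
    const double wire_bytes = 64.0 + 8.0 + 12.0;   /* 84 bytes per packet */
    const double rates_gbps[] = { 1.0, 10.0, 100.0 };

    for (int i = 0; i < 3; i++) {
        double bps = rates_gbps[i] * 1e9;
        double pps = bps / (wire_bytes * 8.0);      /* packets per second */
        double ns_per_pkt = 1e9 / pps;              /* nanoseconds per packet */
        printf("%6.0f Gbps: %12.0f pps, one packet every %.1f ns\n",
               rates_gbps[i], pps, ns_per_pkt);
    }
    return 0;
}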

Simply capturing traffic at this rate in a conventional architecture is enough of a challenge without the added complexity of precise timing, categorization, flow identification and filtering. Performing lossless, high-fidelity packet capture, replay and real-time analysis of data flows at these rates requires a different approach to PCAP, one that moves the bulk of the data processing out of software and into hardware while also eliminating the inefficiency of kernel-to-user-space interactions.

Accelerated PCAP Architecture
Achieving the goals of PCAP on high-speed networks is possible with a hardware-accelerated approach. The targeted use of programmable logic coupled with open source tools allows data to be accurately captured and processed within a network acceleration card (NAC) before it is passed into user-space applications.

High-performance NACs use Field Programmable Gate Arrays (FPGAs) to perform in-line event processing and line-rate packet analysis in hardware at 1/10/40/100 Gbps speeds. These semiconductor devices are built around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects, and they can be reprogrammed after manufacturing to meet specific application or functionality requirements; this programmability makes them an ideal fit for many different markets. Through the use of FPGA-based NACs, network administrators can immediately improve an organization’s ability to monitor and react to events that occur within its network infrastructure.

In this accelerated PCAP architecture, line-rate packet analysis is leveraged to push most of the frame processing into the hardware of the capture device, which can be deployed within a commodity server or workstation, preserving CPU cycles for higher-level analysis. This approach ensures that by the time data is passed to the user-space buffer for access by applications it has already been time stamped, categorized, and filtered appropriately. 

By coupling these devices with open source applications, powerful yet cost-effective solutions can be built for a variety of purposes. In general, high-performance NACs enable easy in-house development of scalable, high-performance network applications over PCAP. Even complex payload analysis and network-wide correlation algorithms can be scaled easily thanks to the flow-based load-balancing mechanism built into the NAC. The more complex the analysis an application performs, the more critical it is that the PCAP stream from the capture device has no packet drops and that the frames are in the correct order. Tasks like protocol reconstruction, reassembly, event detection and QoS calculations are severely impacted by insufficient PCAP performance.

Consider solutions that include support for IEEE 1588 Precision Time Protocol (PTP). PTP maintains precise time synchronization across a distributed deployment in which multiple accelerated PCAP probes are placed throughout a network infrastructure, allowing frames from multiple ports on multiple NACs to be merged into a single, time-ordered analysis stream.
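
Once every probe stamps frames against the same PTP-disciplined clock, combining their outputs is an ordinary merge of time-sorted streams. The sketch below illustrates the idea with a hypothetical timed_frame record; it is not a vendor API, and real systems perform this merge in the driver or in hardware across many ports.

/*
 * Hypothetical sketch of merging two already time-sorted capture
 * streams (e.g. from two PTP-synchronized probes) into one
 * time-ordered stream. The timed_frame struct is an illustrative
 * assumption, not a vendor API.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct timed_frame {
    uint64_t ts_ns;   /* hardware timestamp, nanoseconds since epoch */
    uint32_t port;    /* capture port the frame arrived on */
    uint32_t len;     /* frame length in bytes */
};

/* Always emit the frame with the lower timestamp first, so downstream
 * analysis sees a single, globally ordered stream. */
static void merge(const struct timed_frame *a, size_t na,
                  const struct timed_frame *b, size_t nb)
{
    size_t i = 0, j = 0;
    while (i < na || j < nb) {
        const struct timed_frame *f;
        if (j >= nb || (i < na && a[i].ts_ns <= b[j].ts_ns))
            f = &a[i++];
        else
            f = &b[j++];
        printf("port %u: %u bytes @ %llu ns\n",
               f->port, f->len, (unsigned long long)f->ts_ns);
    }
}

int main(void)
{
    struct timed_frame probe0[] = { { 100, 0, 64 }, { 350, 0, 1518 } };
    struct timed_frame probe1[] = { { 220, 1, 128 }, { 400, 1, 64 } };
    merge(probe0, 2, probe1, 2);
    return 0;
}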

Maintaining this level of temporal fidelity within the capture ensures that organizations can perform retrospective analysis of network events by replaying data in exactly the same way as it was captured, complete with precise timing and inter-frame gap control.
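
For contrast, a purely software-based replay can be sketched with libpcap: frames are read back from a capture file and retransmitted with delays derived from the recorded timestamps. The file and interface names below are placeholders, and software sleeps can only approximate the original spacing, which is exactly why hardware-based replay with nanosecond IFG control is needed at higher speeds.

/*
 * Sketch of software-based replay with libpcap: frames from a capture
 * file are retransmitted with inter-packet delays derived from the
 * recorded timestamps. "capture.pcap" and "eth0" are placeholders.
 * Software sleeps cannot reproduce nanosecond-accurate inter-frame
 * gaps; that timing fidelity is what hardware replay on a NAC provides.
 */
#include <pcap/pcap.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    pcap_t *in = pcap_open_offline("capture.pcap", errbuf);
    if (in == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }
    pcap_t *out = pcap_open_live("eth0", 65535, 0, 1000, errbuf);
    if (out == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    struct timeval prev;
    int have_prev = 0;

    while (pcap_next_ex(in, &hdr, &data) == 1) {
        if (have_prev) {
            /* Sleep for the gap recorded between this frame and the last. */
            long delta_us = (long)(hdr->ts.tv_sec - prev.tv_sec) * 1000000L
                          + (long)(hdr->ts.tv_usec - prev.tv_usec);
            if (delta_us > 0) {
                struct timespec gap = { delta_us / 1000000L,
                                        (delta_us % 1000000L) * 1000L };
                nanosleep(&gap, NULL);
            }
        }
        pcap_sendpacket(out, data, (int)hdr->caplen);
        prev = hdr->ts;
        have_prev = 1;
    }

    pcap_close(in);
    pcap_close(out);
    return 0;
}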

Providing a real-time view of what is happening within a network, as well as the ability to perform a retrospective review of activity, is critical to understanding and measuring performance, identifying bottlenecks, troubleshooting issues, and securing the environment. As such, packet capture and analysis continues to play a critical role in managing and securing large and small-scale networks.

Conclusion
The reality is that traditional means of performing PCAP are being outpaced by today’s high-speed network fabrics, leading to large amounts of dropped packet data and imprecise collections.

Enabling PCAP at 10/40/100 Gbps speeds, and beyond, necessitates that the processing of captured packets be pushed to the point of ingest, leveraging hardware acceleration to maintain precise, lossless capture at these speeds. By using programmable logic and open source software deployed on commodity servers, a novel architecture can be conceived to meet the demands of PCAP on high-speed networks for years to come.

Daniel Joseph Barry is VP Positioning and Chief Evangelist at Napatech and has more than 20 years of experience in the IT and telecom industry. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the telecom sector. From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx), following various positions in product development, business development and product management at Ericsson. Dan Joe joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc in Electronic Engineering from Trinity College Dublin.




Edited by Ken Briodagh