
I have to admit that if Ward Cobley of VIAVI ever decides to start a second career as a comedian he’s going to do very well. His delivery is spot on, and the subject he discussed at Tech Field Day Extra at Cisco Live US 2025 was humorous in a non-traditional way. It got me thinking about the limitations of LLMs when it comes to packet capture analysis.
Packet Overload
Ward and the VIAVI team had a simple goal in mind: feed a packet capture to the most popular LLMs and ask them to find any issues. The fact that there was most definitely an issue in the packet capture ensured that there should be some output. In particular, it was a 132-second delay in a server response to a client request. In the world of TCP, two minutes and twelve seconds might as well be an eternity.
The first problem that came up was that most LLMs can’t take a raw packet capture file, or PCAP, and digest it. As smart as AI might be, it doesn’t have a way to decode that information despite how ubiquitous Wireshark has become in the networking industry. That means you’re going to have to convert your PCAP to something like JSON. That is its own special kind of nightmare.
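In practice that conversion usually means running the capture through a tool like tshark, Wireshark’s command-line sibling, which can export JSON with its `-T json` option. But the classic libpcap file format itself is simple enough to illustrate what the conversion is doing. Here is a minimal, hypothetical sketch (not any vendor’s tool) that pulls only timestamps and lengths out of an in-memory capture and emits JSON; a real export carries full protocol decodes and balloons accordingly.

```python
import json
import struct

def pcap_to_json(data: bytes) -> str:
    """Convert a classic little-endian libpcap byte stream into a JSON
    list of per-packet records (timestamps and lengths only)."""
    # 24-byte global header: magic, version, tz, sigfigs, snaplen, linktype
    magic, = struct.unpack_from("<I", data, 0)
    assert magic == 0xA1B2C3D4, "expected classic little-endian pcap"
    packets, offset = [], 24
    while offset + 16 <= len(data):
        # Each packet is preceded by a 16-byte record header.
        ts_sec, ts_usec, incl_len, orig_len = struct.unpack_from(
            "<IIII", data, offset)
        packets.append({
            "time": ts_sec + ts_usec / 1e6,
            "captured_len": incl_len,
            "original_len": orig_len,
        })
        offset += 16 + incl_len  # skip past the raw packet bytes
    return json.dumps(packets)

# Build a tiny two-packet capture in memory to exercise the parser.
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
pkt1 = struct.pack("<IIII", 1000, 0, 4, 4) + b"\x00" * 4
pkt2 = struct.pack("<IIII", 1132, 0, 4, 4) + b"\x00" * 4  # 132 seconds later
records = json.loads(pcap_to_json(header + pkt1 + pkt2))
print(len(records), records[1]["time"] - records[0]["time"])
```

Even this stripped-down version shows the problem: the JSON is many times the size of the binary records it describes, and we haven’t decoded a single protocol field yet.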
Okay, PCAP converted, and now it’s time to upload the file to the LLM. How big is that file exactly? Most LLMs will lose their imaginary minds when you try to feed them hundreds of megabytes or gigabytes of traffic. Ward and his team found that anything over about 200 packets’ worth of data would choke most of them. Some, such as an early version of Microsoft Copilot, needed the problem narrowed down to about 20 packets. If you, as a network engineer, have already narrowed the packet capture down to about 20 packets, I don’t think you need AI to tell you where the problem is.
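If you do go down this road, trimming the export to a model-sized chunk is the easy part. A rough sketch, assuming the JSON export is a flat list of per-packet records (an illustrative shape, not any particular tool’s output) and using the roughly 200-packet ceiling Ward’s team observed as the cap:

```python
import json

MAX_PACKETS = 200  # rough ceiling Ward's team saw for most models

def trim_for_llm(json_text: str, limit: int = MAX_PACKETS) -> str:
    """Keep only the first `limit` packet records so the upload stays
    under what the model will actually digest."""
    packets = json.loads(json_text)
    return json.dumps(packets[:limit])

# A fake 1,000-record export shrinks to the cap.
capture = json.dumps([{"frame": i} for i in range(1000)])
trimmed = json.loads(trim_for_llm(capture))
print(len(trimmed))
```

Of course, the hard part isn’t the slicing. It’s knowing *which* 200 packets contain the problem, which is exactly the analysis you were hoping the LLM would do for you.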
Back to the problem at hand. You’ve converted the file to JSON and narrowed the focus down to a hundred packets or so. What did the LLMs find? In the above video you can see that most of them missed the big problem. ChatGPT missed it entirely and instead flagged a 64 ms delay as “big,” while others stated that the response times were “fast.”
And what about the LLM that didn’t even answer the question and instead hallucinated that one of the packets was a stock transaction with bid and ask prices, which it most certainly was not? How can you miss the big thing and then make something up out of nowhere just because you feel like you have to answer the question?
Asking the Right Questions
This entire exercise has done a perfect job of illustrating why certain applications are still too complex for AI to really take over. The most obvious problem is that asking AI to analyze something like a massive packet capture is beyond the capabilities of these systems today. Not only do you need to figure out where the problem might be, but you also have to convert the file and hope you don’t lose context. If you can narrow the issue down to a few seconds of packet exchange, you’re likely intelligent enough to do the analysis without the help of AI.
The other issue comes when you ask the LLM to do some analysis and it doesn’t come back with a good answer, which sends you back to the drawing board to ask again in a different way with a tweaked prompt. It reminds me quite a bit of a senior engineer repeatedly asking a junior team member if something doesn’t look right somewhere. The junior engineer doesn’t know it yet, but the questions are designed to make them look closer. In that case, however, you’re training someone to be a valuable team member. With an LLM you’re just hoping that one of those prompts produces output that could get you somewhere in the vicinity of an answer.
The final issue that I think people need to understand is that uploading packet capture data to an LLM creates a new attack surface. Internal addressing information and naming conventions mean that anyone who can access that data has reconnaissance information for your network. Why would you increase your attack surface like that just because you’re wondering if the AI sees something you don’t? I can see that you’re creating hassles and opportunities for exploitation from here!
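If you’re determined to upload capture data anyway, at a bare minimum scrub the addressing first. This is a crude, illustrative sketch only: a regex that maps each IPv4 address to a stable placeholder before anything leaves your network. Hostnames, MAC addresses, and payload contents would all need the same treatment before the data was anywhere near safe to share.

```python
import re

def scrub(json_text: str) -> str:
    """Replace every IPv4 address with a stable placeholder so the
    uploaded capture leaks no internal addressing plan."""
    mapping = {}

    def repl(match):
        ip = match.group(0)
        if ip not in mapping:
            mapping[ip] = f"host-{len(mapping) + 1}"  # same IP, same alias
        return mapping[ip]

    return re.sub(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", repl, json_text)

record = '{"src": "10.1.20.5", "dst": "10.1.20.9", "resp": "10.1.20.5"}'
print(scrub(record))
```

Because the mapping is stable, the LLM can still see that two packets involve the same host, which is the property most timing analysis actually needs. It just can’t see which host.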
Bringing IT All Together
I laughed quite a bit at Ward’s presentation because it highlights just how big the gap is between what everyone thinks AI is capable of and what it can actually accomplish today. We think that we can just force feed data to these magical LLMs and what comes out the other side is golden. In fact, companies like VIAVI that have spent years building sophisticated packet analysis tools know the hard work that has to go into not only digesting the data but providing insights. I’m not worried about Ward’s side gig as a comedian because he’s got a long career ahead of him doing what AI and LLMs choke on today.
For more information about VIAVI and their packet capture and analysis tools, check out their website at https://www.viavisolutions.com/. To see their presentation from Tech Field Day Extra at Cisco Live US 2025, check out their appearance page here.