More Fallout from Title II Order: A New Rule That Could Degrade Internet Performance


One of the major concerns with the FCC’s recently adopted Title II order is that it goes far beyond reasonable efforts to ensure an open Internet and potentially interferes with operational aspects of the Internet that historically have been handled without any government involvement. The FCC hasn’t even published its rules yet, and already there is chatter in Internet circles about the Commission’s engineering judgments and their unintended consequences.

One example is the wonky topic of packet loss. Buried in the FCC’s announcement is this sentence: “Disclosures must also include packet loss as a measure of network performance.” Packet loss occurs when one or more packets of data traveling across a computer network fail to reach their destination. One might assume packet loss is an appropriate way to measure network performance, since fewer lost packets sounds strictly better, but the reality is more complex.

Network performance depends on a variety of factors, including latency, jitter, path length, the number of available paths, path choice, and the number of concurrent Internet sessions sharing a link. Measuring and reporting packet loss in isolation ignores all of these.
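To make that concrete, here is a minimal sketch in Python. The round-trip samples are hypothetical, but they show how a link could report a single loss figure while latency and jitter tell a very different story:

```python
# Minimal sketch: packet loss alone says little about overall performance.
# The round-trip times below (in milliseconds) are hypothetical samples;
# None marks a probe packet that never came back.
samples = [21.0, 23.5, None, 22.1, 95.0, 24.3, None, 22.8]

received = [s for s in samples if s is not None]

loss_rate = (len(samples) - len(received)) / len(samples)
avg_latency = sum(received) / len(received)
# Jitter here: mean absolute difference between consecutive round trips.
jitter = sum(abs(a - b) for a, b in zip(received, received[1:])) / (len(received) - 1)

print(f"packet loss: {loss_rate:.1%}")     # 25.0%
print(f"avg latency: {avg_latency:.1f} ms")
print(f"jitter:      {jitter:.1f} ms")
```

Two links with identical loss rates could still differ wildly on the other two numbers, which is exactly what a loss-only disclosure hides.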

The FCC unilaterally deciding which Internet metrics ISPs should focus on is problematic, and a major departure from how the Internet was created and developed. In fact, the Internet Protocol was wisely designed as a best-effort service: the network is free to drop some packets in favor of others, based on the needs of different types of traffic. As the Internet Engineering Task Force (IETF) has recognized, the Internet supports many applications, each with different delivery requirements. Some need reliable delivery, while others work just fine when the occasional packet is dropped. For the former, the Transmission Control Protocol (TCP), standardized by the IETF, retransmits dropped packets so that applications that cannot tolerate data loss still receive every byte.
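In programming terms, that choice has always belonged to the application. A minimal Python sketch (the sockets are created but no traffic is sent):

```python
import socket

# An application that cannot tolerate data loss opens a TCP (stream)
# socket; the operating system's TCP stack retransmits dropped packets.
reliable = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# A loss-tolerant application (live voice, video, gaming) can use a UDP
# (datagram) socket instead: a dropped packet is simply gone, and the
# application keeps going rather than waiting for a retransmission.
loss_tolerant = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

reliable.close()
loss_tolerant.close()
```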

What has made the Internet great is the fact that application programmers could design their programs to work however they saw fit. These programmers figured out how to make their applications work over the Internet even if packets got dropped occasionally along the way.

A focus on packet loss has the potential to improperly incentivize ISPs to optimize their networks for this single parameter. To minimize packet loss, equipment vendors and Internet engineers would inevitably add ever-larger buffers to network interfaces, absorbing bursts of packets rather than dropping them.
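While that may sound positive, buffering carries a cost that is easy to quantify. As a rough back-of-the-envelope sketch (the buffer and link sizes below are illustrative assumptions, not figures from any actual device):

```python
# Back-of-the-envelope: how much waiting a full buffer adds to every packet.
# These figures are illustrative assumptions, not measurements.
buffer_bytes = 1_000_000         # a hypothetical 1 MB interface buffer
link_bits_per_sec = 10_000_000   # a hypothetical 10 Mbit/s access link

# A packet arriving at the back of a full queue must wait for the whole
# buffer to drain before it is transmitted.
queueing_delay_ms = (buffer_bytes * 8) / link_bits_per_sec * 1000
print(f"added delay: {queueing_delay_ms:.0f} ms")  # 800 ms
```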

Far from improving performance, those queues would degrade it: every packet would have to wait in a long line to get through the Internet. Big standing buffers of packets, a problem network engineers call bufferbloat, also have the potential to break TCP, which was designed from the beginning on the assumption that a network at capacity drops packets. TCP uses loss as a signal to slow its transmission rate, reducing the likelihood of further drops. With an FCC focus on minimizing packet loss, many applications, such as Skype, Google Hangouts, FaceTime, and gaming apps, may no longer work as well as we all expect.
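That feedback loop is worth sketching. The toy model below is nothing like a real TCP stack, which is far more sophisticated, but it shows the core additive-increase/multiplicative-decrease idea: a dropped packet is information, and burying it in a deep buffer delays the only signal that tells a sender to back off:

```python
# Toy additive-increase / multiplicative-decrease (AIMD) loop.
# Real TCP congestion control is far more elaborate; this is only a sketch.

def aimd(events, cwnd=1.0):
    """events: a sequence of 'ack' or 'loss'; returns the window after each."""
    history = []
    for event in events:
        if event == "ack":
            cwnd += 1.0                # additive increase: probe for capacity
        else:                          # 'loss'
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease: back off
        history.append(cwnd)
    return history

# With timely loss signals, the window oscillates around the link's capacity:
print(aimd(["ack"] * 5 + ["loss"] + ["ack"] * 3))
# -> [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0]
# If deep buffers absorb bursts instead of dropping packets, the 'loss'
# event arrives much later, and the sender keeps accelerating meanwhile.
```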

The FCC’s packet loss focus is also a major departure from the typical process for establishing Internet performance measures, and a complete repudiation of the collaborative approach to measurement that the Commission has followed so successfully in the Measuring Broadband America program. The IETF and its protocols have served the Internet well up until now, but the inclusion of packet loss as a measure of network performance has the potential to severely disrupt the Internet as we know it. As the FCC embarks on a new era of Internet regulation, it should recognize how harmful its actions can be and commit to leaving the engineering to the Internet Engineering Task Force.