Bandwidth
Anyone with an internet connection is probably fairly familiar with the concept of bandwidth. The monthly rates for an internet service are typically (at least in the US) tiered by the maximum bandwidth provided. In professional programming vernacular, the term bandwidth is often used, somewhat loosely, to refer to the amount of time or mental capacity a team or team member can dedicate to new tasks. So each of us should have at least an intuitive understanding of the concept. Put simply, in the networking sense, bandwidth is the maximum rate of data transmission over a given network connection.
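To make that definition a bit more concrete, here's a minimal Python sketch of what a bandwidth figure actually lets you estimate: the best-case time to move a payload of a given size over a link with a given rated bandwidth. The figures used are arbitrary illustrations, not measurements of any real network.

```python
def best_case_transfer_seconds(payload_bytes: int, bandwidth_bits_per_sec: int) -> float:
    """Ideal transfer time, ignoring protocol overhead, latency, and congestion."""
    payload_bits = payload_bytes * 8
    return payload_bits / bandwidth_bits_per_sec

# A 500 MB download over a 100 Mbps connection takes, at best, about 40 seconds:
print(best_case_transfer_seconds(500_000_000, 100_000_000))  # 40.0
```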
While that definition might seem basic, or even trivial, the way that bandwidth drives the standards for packet size and structure may be less obvious. So, let's consider more thoroughly what bandwidth describes and how it impacts data transmission. There are two things to consider when we're discussing bandwidth: the speed of throughput and the channel's maximum capacity.
The easiest way to grasp these concepts is through the analogy of a highway. Imagine that you're the operator of a tollbooth on this hypothetical highway. For this analogy, though, let's say that instead of collecting a toll, you're responsible for counting the total number of cars that move past your booth over a given period of time. The cars on your highway represent individual bits of data, and every time a car crosses your tollbooth, you tally it. The total number of cars that cross your booth in a given period represents the bandwidth of your highway over that period. With this analogy in place, let's see how throughput speed and channel capacity each impact that bandwidth.
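If you wanted to play the tollbooth operator in code, you might tally the bits passing through a transfer loop over a fixed window and divide by the elapsed time. The sketch below assumes a recv_chunk callable as a stand-in for whatever produces the incoming data (a socket read, for instance); it's a hypothetical hook, not a real library API.

```python
import time

def measure_bandwidth(recv_chunk, window_seconds: float = 1.0) -> float:
    """Return observed throughput in bits per second over the given window."""
    bits_counted = 0
    start = time.monotonic()
    while time.monotonic() - start < window_seconds:
        chunk = recv_chunk()            # each chunk is a batch of "cars"
        bits_counted += len(chunk) * 8  # tally every bit that crosses the booth
    elapsed = time.monotonic() - start
    return bits_counted / elapsed
```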
In this characterization, the speed of throughput is analogous to the speed limit of your highway. It's the maximum physical velocity at which a signal can travel over a connection. A number of factors can affect this speed, but in most cases their impact is negligible compared to the physics of electrical or optical signals traveling over their respective media. Speed ultimately boils down to the physical limits of the transmission medium itself. So, for example, fiber-optic cables will have a much higher throughput speed than copper wire. Fiber-optic cables carry signals as light through glass, while copper wire introduces electrical resistance that weakens and distorts a signal, limiting how quickly data can reliably move over it. In the context of our highway analogy, then, fiber-optic networks have a much higher speed limit than copper ones. Sitting in your tollbooth over a single shift, more cars will pass by on a highway with a higher speed limit. Given this, one straightforward way to increase the bandwidth of a network is simply to upgrade its transmission media.
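To put rough numbers on the idea that the medium sets the speed limit, the sketch below compares nominal per-medium ceilings and asks how many "cars" could pass the tollbooth in a given window. The rates are illustrative figures for common cabling, not guarantees; real limits depend on cable grade, run length, and the transceivers at either end.

```python
# Nominal, illustrative ceilings for common transmission media (bits per second).
MEDIUM_MAX_BPS = {
    "cat5e_copper": 1_000_000_000,         # roughly 1 Gbps Ethernet
    "cat6a_copper": 10_000_000_000,        # roughly 10 Gbps over short runs
    "single_mode_fiber": 100_000_000_000,  # 100 Gbps optics and beyond
}

def cars_past_the_booth(medium: str, seconds: float) -> int:
    """Best-case number of bits ("cars") that could pass the tollbooth in the window."""
    return int(MEDIUM_MAX_BPS[medium] * seconds)

print(cars_past_the_booth("single_mode_fiber", 1.0))  # 100000000000
```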
While the speed of throughput is a strong determinant of bandwidth, we should also take a moment to consider the maximum capacity of a given channel. Specifically, this refers to how many bits the channel's physical wires can carry simultaneously at any given moment. In our highway analogy, the channel capacity describes the number of lanes a car could travel in. So, imagine that instead of a single-file line of cars moving down a single lane, our highway has been expanded to four lanes in one direction. Now, at any given moment, we could have four cars, or four bits of data, moving through our tollbooth.
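In code, the effect of adding lanes is just multiplication: the best-case aggregate rate is the per-lane rate times the number of lanes that can carry a bit at the same time. The numbers below are arbitrary examples.

```python
def aggregate_bandwidth_bps(per_lane_bps: int, lanes: int) -> int:
    """Combined best-case rate across parallel channels ("lanes")."""
    return per_lane_bps * lanes

# Four lanes at 250 Mbps each behave, at best, like a single 1 Gbps link:
print(aggregate_bandwidth_bps(250_000_000, 4))  # 1000000000
```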
Obviously, it's the responsibility of the systems programmers writing firmware for network interface devices to properly support multiple simultaneous channels. However, as I'm sure you can imagine, variable channel capacity can also demand very specific optimizations from the network entities responsible for breaking your data into atomic packets.
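As a rough illustration of that last point, here's a minimal packetization sketch: splitting a payload into chunks no larger than some maximum unit the channel can carry. The 1500-byte default mirrors a typical Ethernet MTU, but the appropriate size depends entirely on the link in question.

```python
def packetize(payload: bytes, max_packet_size: int = 1500) -> list[bytes]:
    """Split a payload into packets of at most max_packet_size bytes each."""
    return [payload[i:i + max_packet_size]
            for i in range(0, len(payload), max_packet_size)]

packets = packetize(b"x" * 4000)
print([len(p) for p in packets])  # [1500, 1500, 1000]
```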