Chapter 3. Bandwidth and Processing Power

Of the two resources discussed in this chapter, one (bandwidth) is often hard for the new system administrator to understand, while the other (processing power) is usually a much easier concept to grasp.
Additionally, it may seem that these two resources are not that closely related -- why group them together?
The reason for addressing both resources together is that they are rooted in the hardware that ties directly into a computer's ability to move and process data. As such, the two are closely interrelated.

3.1. Bandwidth

At its most basic, bandwidth is the capacity for data transfer -- in other words, how much data can be moved from one point to another in a given amount of time. Having point-to-point data communication implies two things:
  • A set of electrical conductors used to make low-level communication possible
  • A protocol to facilitate the efficient and reliable communication of data
There are two types of system components that meet these requirements:
  • Buses
  • Datapaths
The following sections explore each in more detail.
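To make this definition concrete, here is a minimal sketch (in Python, using purely illustrative numbers rather than the ratings of any real bus) that computes how long a transfer takes for a given amount of data and a given bandwidth:

    # Minimal sketch: transfer time as a function of data size and bandwidth.
    # The figures below are illustrative and not tied to any particular bus.

    def transfer_time_seconds(data_bytes, bandwidth_bytes_per_sec):
        """Time needed to move data_bytes across a link with the given bandwidth."""
        return data_bytes / bandwidth_bytes_per_sec

    one_gigabyte = 1_000_000_000    # 1 GB of data to move
    link_speed = 100_000_000        # hypothetical 100 MB/s link

    print(f"{transfer_time_seconds(one_gigabyte, link_speed):.1f} seconds")  # 10.0 seconds

Doubling the bandwidth halves the transfer time; the rest of this section looks at what limits that bandwidth and what can be done when it runs short.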

3.1.1. Buses

As stated above, buses enable point-to-point communication and use some sort of protocol to ensure that all communication takes place in a controlled manner. However, buses have other distinguishing features:
  • Standardized electrical characteristics (such as the number of conductors, voltage levels, signaling speeds, etc.)
  • Standardized mechanical characteristics (such as the type of connector, card size, physical layout, etc.)
  • Standardized protocol
The word "standardized" is important because buses are the primary way in which different system components are connected together.
In many cases, buses allow the interconnection of hardware made by multiple manufacturers; without standardization, this would not be possible. However, even in situations where a bus is proprietary to one manufacturer, standardization is important because it allows that manufacturer to more easily implement different components by using a common interface -- the bus itself.

3.1.1.1. Examples of Buses

No matter where in a computer system you look, there are buses. Here are a few of the more common ones:
  • Mass storage buses (ATA and SCSI)
  • Networks[9] (Ethernet and Token Ring)
  • Memory buses (PC133 and Rambus®)
  • Expansion buses (PCI, ISA, USB)

3.1.2. Datapaths

Datapaths can be harder to identify but, like buses, they are everywhere. Also like buses, datapaths enable point-to-point communication. However, unlike buses, datapaths:
  • Use a simpler protocol (if any)
  • Have little (if any) mechanical standardization
The reason for these differences is that datapaths are normally internal to some system component and are not used to facilitate the ad-hoc interconnection of different components. As such, datapaths are highly optimized for a particular situation, trading the slower, more expensive flexibility of a general-purpose design for speed and low cost.

3.1.2.1. Examples of Datapaths

Here are some typical datapaths:
  • CPU to on-chip cache datapath
  • Graphics processor to video memory datapath

3.1.3. Potential Bandwidth-Related Problems

There are two ways in which bandwidth-related problems may occur (for either buses or datapaths):
  1. The bus or datapath may represent a shared resource. In this situation, high levels of contention for the bus reduce the effective bandwidth available for all devices on the bus.
    A SCSI bus with several highly-active disk drives would be a good example of this. The highly-active disk drives saturate the SCSI bus, leaving little bandwidth available for any other device on the same bus. The end result is that all I/O to any device on this bus is slow, even I/O to devices that are not themselves very active.
  2. The bus or datapath may be a dedicated resource with a fixed number of devices attached to it. In this case, the electrical characteristics of the bus (and to some extent the nature of the protocol being used) limit the available bandwidth. This is usually more the case with datapaths than with buses. This is one reason why graphics adapters tend to perform more slowly when operating at higher resolutions and/or color depths -- for every screen refresh, there is more data that must be passed along the datapath connecting video memory and the graphics processor.
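Some rough arithmetic illustrates both situations. The sketch below (Python, using hypothetical device counts and a hypothetical display mode) treats a shared bus's bandwidth as being split evenly among active devices -- a simplification -- and then estimates how much data must cross the video datapath for each screen refresh:

    # Rough arithmetic for the two situations above; all figures are hypothetical.

    # 1. Shared bus: contention divides the effective bandwidth among active
    #    devices (an even split is a simplification, but it shows the trend).
    bus_bandwidth_mb_s = 160        # assumed total shared-bus throughput, MB/s
    active_devices = 4              # devices competing for the bus at once
    print(f"Each active device sees roughly "
          f"{bus_bandwidth_mb_s / active_devices:.0f} MB/s")

    # 2. Dedicated datapath: higher resolution and color depth mean more data
    #    must move for every screen refresh.
    width, height = 1600, 1200      # display resolution, pixels
    bytes_per_pixel = 4             # 32-bit color
    refresh_hz = 75                 # screen refreshes per second

    bytes_per_refresh = width * height * bytes_per_pixel
    mb_per_second = bytes_per_refresh * refresh_hz / 1_000_000
    print(f"{bytes_per_refresh / 1_000_000:.1f} MB per refresh, "
          f"about {mb_per_second:.0f} MB/s across the video datapath")

With these made-up figures, each busy device on the shared bus gets roughly a quarter of the bus, and the display mode pushes several hundred megabytes per second across the video datapath -- which is why higher resolutions and color depths slow a graphics adapter down.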

3.1.4. Potential Bandwidth-Related Solutions

Fortunately, bandwidth-related problems can be addressed. In fact, there are several approaches you can take:
  • Spread the load
  • Reduce the load
  • Increase the capacity
The following sections explore each approach in more detail.

3.1.4.1. Spread the Load

The first approach is to more evenly distribute the bus activity. In other words, if one bus is overloaded and another is idle, perhaps the situation would be improved by moving some of the load to the idle bus.
As a system administrator, this is the first approach you should consider, as often there are additional buses already present in your system. For example, most PCs include at least two ATA channels (which is just another name for a bus). If you have two ATA disk drives and two ATA channels, why should both drives be on the same channel?
Even if your system configuration does not include additional buses, spreading the load might still be a reasonable approach. The hardware required to do so would cost less than replacing the existing bus with higher-capacity hardware.
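As a rough illustration of the two-channel example, the sketch below (Python, with invented per-drive demand and per-channel capacity figures) compares putting both drives on one ATA channel against putting one drive on each:

    # Hypothetical figures illustrating why two busy drives are better off on
    # separate ATA channels than sharing one.

    channel_capacity_mb_s = 100     # assumed per-channel bandwidth, MB/s
    drive_demand_mb_s = 70          # assumed sustained demand of each busy drive

    shared_demand = 2 * drive_demand_mb_s
    print(f"Shared channel: {shared_demand} MB/s demanded of "
          f"{channel_capacity_mb_s} MB/s available "
          f"-> saturated: {shared_demand > channel_capacity_mb_s}")

    print(f"Split channels: {drive_demand_mb_s} MB/s demanded of "
          f"{channel_capacity_mb_s} MB/s available on each channel "
          f"-> saturated: {drive_demand_mb_s > channel_capacity_mb_s}")

With these assumed figures, the two drives together demand more than one channel can carry, while splitting them leaves comfortable headroom on both channels.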

3.1.4.2. Reduce the Load

At first glance, reducing the load and spreading the load appear to be two sides of the same coin. After all, when one spreads the load, it acts to reduce the load (at least on the overloaded bus), correct?
While this viewpoint is correct, it is not the same as reducing the load globally. The key here is to determine if there is some aspect of the system load that is causing this particular bus to be overloaded. For example, is a network heavily loaded due to activities that are unnecessary? Perhaps a small temporary file is the recipient of heavy read/write I/O. If that temporary file resides on a networked file server, a great deal of network traffic could be eliminated by working with the file locally.
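To put a number on the temporary-file example, the back-of-the-envelope sketch below (Python, with invented I/O rates) estimates the network traffic eliminated by keeping the file on local storage instead of a networked file server:

    # Back-of-the-envelope estimate for the temporary-file example above.
    # All rates are invented for the sake of illustration.

    io_operations_per_sec = 200     # assumed reads and writes hitting the temp file
    bytes_per_operation = 8_192     # assumed average I/O size (8 KB)
    seconds_per_hour = 3_600

    network_bytes_per_hour = io_operations_per_sec * bytes_per_operation * seconds_per_hour
    print(f"Remote temp file: roughly "
          f"{network_bytes_per_hour / 1_000_000_000:.1f} GB of network traffic per hour")

At these assumed rates, nearly six gigabytes of traffic per hour disappear from the network simply by working with the file locally -- load that no longer competes with anything else for that bus.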

3.1.4.3. Increase the Capacity

The obvious solution to insufficient bandwidth is to increase it somehow. However, this is usually an expensive proposition. Consider, for example, a SCSI controller and its overloaded bus. To increase its bandwidth, the SCSI controller (and likely all devices attached to it) would need to be replaced with faster hardware. If the SCSI controller is a separate card, this would be a relatively straightforward process, but if the SCSI controller is part of the system's motherboard, it becomes much more difficult to justify the economics of such a change.

3.1.5. In Summary…

All system administrators should be aware of bandwidth, and how system configuration and usage impacts available bandwidth. Unfortunately, it is not always apparent what is a bandwidth-related problem and what is not. Sometimes, the problem is not the bus itself, but one of the components attached to the bus.
For example, consider a SCSI adapter that is connected to a PCI bus. If there are performance problems with SCSI disk I/O, it might be the result of a poorly-performing SCSI adapter, even though the SCSI and PCI buses themselves are nowhere near their bandwidth capabilities.


[9] Instead of an intra-system bus, networks can be thought of as an inter-system bus.