Preamble:
I'll use this space to synthesize my notes and improve my learning process while I study for the CompTIA Network+ N10-009 certification exam. Please follow along for more Network+ notes, and feel free to ask questions or, if I get something wrong, offer suggestions to correct any mistakes.
How Data Travels Through Networks
Now that we’ve explored the OSI model and the theory behind networking, let's examine how data actually travels using physical cables or wireless (WiFi) signals. Network data transfer works by modulation, a process of varying the properties of a transmission medium—such as electric current in cables, infrared light for some short-range communication, or radio waves for WiFi—to encode a signal.
One fundamental example of modulation in wired networks is transitioning between low and high voltage states within an electrical circuit. These distinct voltage pulses can represent symbols, which are then systematically mapped to digital bits—the fundamental ones and zeros of computer data.
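As a toy illustration of this idea (not any specific Ethernet line code; the voltage levels and decision threshold below are made up for demonstration), here is how two voltage symbols could map to bits:

```python
# A toy illustration of mapping voltage symbols to bits. Real Ethernet
# line codes (e.g., Manchester encoding in 10BASE-T) are more involved;
# this only shows the symbols -> bits idea.

LOW, HIGH = 0.0, 5.0  # made-up voltage levels for demonstration

def encode(bits):
    """Map each bit to a voltage symbol (1 -> HIGH, 0 -> LOW)."""
    return [HIGH if b else LOW for b in bits]

def decode(symbols, threshold=2.5):
    """Map each received voltage back to a bit using a decision threshold."""
    return [1 if v > threshold else 0 for v in symbols]

data = [1, 0, 1, 1, 0]
signal = encode(data)
assert decode(signal) == data
print(signal)  # [5.0, 0.0, 5.0, 5.0, 0.0]
```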
Each type of transmission medium supports a specific range of possible frequencies, measured in Hertz (Hz), or cycles per second. A crucial principle is that higher frequencies allow for a greater amount of data to be transferred per second. This usable range of frequencies for a given medium is known as its bandwidth.
In the context of data networking, the term "bandwidth" is most commonly used to describe data transfer capacity, measured in multiples of bits per second (bps), such as megabits per second (Mbps) or gigabits per second (Gbps). Note that thanks to sophisticated encoding techniques, the actual data transfer rate can significantly exceed the raw frequency bandwidth of the signal. For instance, a signal occupying 100 MHz of frequency bandwidth can achieve data rates much higher than 100 Mbps.
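One way to see why is Nyquist's formula for an idealized, noiseless channel: the maximum bit rate is 2 × B × log2(M), where B is the frequency bandwidth in Hz and M is the number of distinct signal levels per symbol. The sketch below is a simplified model (real channels also contend with noise, which Shannon's capacity theorem accounts for), but it shows how multi-level signaling pushes the data rate past the raw bandwidth:

```python
import math

def nyquist_max_rate(bandwidth_hz, levels):
    """Nyquist's formula for an idealized noiseless channel:
    max bit rate = 2 * bandwidth * log2(number of signal levels)."""
    return 2 * bandwidth_hz * math.log2(levels)

# A 100 MHz channel with simple 2-level signaling:
print(nyquist_max_rate(100e6, 2) / 1e6)   # 200.0 Mbps
# The same channel with 16-level signaling (4 bits per symbol):
print(nyquist_max_rate(100e6, 16) / 1e6)  # 800.0 Mbps
```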
Ethernet
Over time, a multitude of protocols, standards, and hardware products have been developed to implement the functions of the Physical and Data Link layers of the OSI model for wired networks. These Ethernet standards, primarily defined by the Institute of Electrical and Electronics Engineers (IEEE) under the 802.3 working group, are essential for ensuring compatibility and performance across different network hardware. A key function of these standards is to specify the characteristics of cables and connectors, as well as the methods for signal modulation and data encoding.
The IEEE 802.3 Ethernet Standard is exceptionally prevalent in both Local Area Networks (LANs) and Wide Area Networks (WANs). Compliance with these standards helps ensure that network cabling will meet the bandwidth requirements of the applications running over it. Ethernet media specifications follow a common three-part naming convention, often referred to as xBASE-y. This convention provides key information:
- Speed or bit rate: Indicates the data transfer rate in megabits per second (Mbps) or gigabits per second (Gbps).
- Signal mode: Specifies whether baseband or broadband transmission is used. In baseband transmission, the entire bandwidth of the cable is used for a single data signal. All mainstream Ethernet technologies utilize baseband transmission, so you will predominantly encounter specifications in the format xBASE-y.
- Media type designator: A letter or combination of letters that identifies the physical medium used for the connection.
For example, 10BASE-T represents an early Ethernet implementation that operates at a speed of 10 Mbps (10), employs baseband signaling (BASE), and runs over twisted pair copper cabling (-T), which typically uses RJ45 connectors.
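To help the convention stick, here is a small, hypothetical parser for the common Mbps-style names (note that multi-gigabit standards such as 10GBASE-T put a "G" in the speed field, which this simple pattern doesn't handle):

```python
import re

# Hypothetical study helper: split an xBASE-y name into its parts.
SPEC = re.compile(r"^(\d+)(BASE)-(\w+)$", re.IGNORECASE)

def parse_spec(name):
    m = SPEC.match(name)
    if not m:
        raise ValueError(f"not in xBASE-y form: {name}")
    speed, signal, media = m.groups()
    return {"speed_mbps": int(speed),
            "signaling": signal.lower() + "band",  # BASE -> baseband
            "media": media}

print(parse_spec("10BASE-T"))    # {'speed_mbps': 10, 'signaling': 'baseband', 'media': 'T'}
print(parse_spec("100BASE-TX"))  # {'speed_mbps': 100, 'signaling': 'baseband', 'media': 'TX'}
print(parse_spec("1000BASE-T"))  # {'speed_mbps': 1000, 'signaling': 'baseband', 'media': 'T'}
```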
Copper Cable Ethernet
Copper cable is a common medium for transmitting electrical signals in Ethernet networks. The physical cable connecting two network devices (nodes) forms a low-voltage electrical circuit between their network interfaces. Two primary types of copper cable are used in Ethernet: twisted pair and coaxial (coax), the latter often utilizing BNC connectors. A significant limitation of copper cable is attenuation, where the signal strength weakens rapidly over longer distances.
Twisted pair cable is categorized according to Category (CAT) standards (e.g., Cat 5, Cat 5e, Cat 6), which define the bandwidth and maximum distance the cable can reliably support. Higher CAT numbers generally indicate better performance characteristics, allowing for higher data rates and sometimes longer cable runs.
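As a quick study reference, the commonly cited ratings for these categories look roughly like this (treat the figures as approximations and verify them against current TIA specifications):

```python
# Commonly cited twisted pair Category ratings (approximate; double-check
# against current TIA/EIA documentation before relying on these figures).
CAT_RATINGS = {
    "Cat 5":  {"bandwidth_mhz": 100, "max_rate": "100 Mbps"},
    "Cat 5e": {"bandwidth_mhz": 100, "max_rate": "1 Gbps"},
    "Cat 6":  {"bandwidth_mhz": 250, "max_rate": "1 Gbps (10 Gbps over short runs)"},
    "Cat 6a": {"bandwidth_mhz": 500, "max_rate": "10 Gbps"},
}

for cat, specs in CAT_RATINGS.items():
    print(f"{cat}: {specs['bandwidth_mhz']} MHz, up to {specs['max_rate']}")
```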
Media Access Control and Collision Domains
Ethernet operates as a multiple access network, meaning that the available communication capacity of the shared physical medium is accessible to all connected network devices (nodes). Media Access Control (MAC) refers to the set of rules and procedures that a network technology employs to determine when a node is permitted to transmit data on the shared medium and how to handle conflicts, such as two devices attempting to transmit simultaneously.
Ethernet utilizes a contention-based MAC system. In this approach, each network node connected to the same shared medium resides within the same collision domain. A collision occurs when two or more nodes transmit data at precisely the same moment. When this happens, the electrical signals interfere with each other, and neither transmission successfully reaches its intended destination. As a result, the data packets involved in the collision must be retransmitted, which reduces the overall effective bandwidth of the network. The frequency of collisions tends to increase as more devices are added to the same collision domain, leading to a further reduction in the effective data rate.
The specific Ethernet protocol that governs contention and media access is Carrier Sense Multiple Access with Collision Detection (CSMA/CD). The "Carrier Sense" aspect means that a node will listen to the network medium to check if it is currently in use before attempting to transmit. "Multiple Access" indicates that multiple nodes share the same medium. "Collision Detection" refers to the mechanism by which a node actively monitors the medium while transmitting to detect if a collision has occurred.
A collision is identified when a network interface detects the presence of a signal on both its transmit and receive lines simultaneously. Upon detecting a collision, the transmitting node immediately broadcasts a special jam signal across the network. This jam signal alerts all other nodes on the segment that a collision has occurred. Following a collision and the transmission of the jam signal, each node that was attempting to transmit waits for a random period of time, known as the backoff interval, before attempting to retransmit its data.
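The backoff interval follows what 802.3 calls truncated binary exponential backoff: after the nth collision on a frame, a node waits a random number of slot times drawn from 0 to 2^min(n, 10) − 1, and abandons the frame after 16 failed attempts. A minimal sketch:

```python
import random

def backoff_slots(collision_count):
    """Truncated binary exponential backoff (CSMA/CD): after the nth
    collision, wait a random number of slot times drawn from
    0 .. 2**min(n, 10) - 1."""
    exponent = min(collision_count, 10)
    return random.randint(0, 2**exponent - 1)

# Two colliding nodes usually draw different intervals, so their
# retransmissions are unlikely to collide again:
for attempt in range(1, 5):
    print(f"after collision {attempt}: wait {backoff_slots(attempt)} slot time(s)")
```

Because the range of possible intervals doubles with each successive collision, the busier the segment, the longer nodes spread out their retries.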
The inherent nature of the collision detection mechanism in older Ethernet implementations means that only half-duplex transmission is possible. In half-duplex communication, a network node can either transmit data or receive data, but it cannot perform both actions at the same time.
In the early 10BASE-T physical topology, which employed a star wiring configuration, each network node was connected via its own cable to a central device called an Ethernet hub. A hub operates at the Physical layer of the OSI model and simply repeats any incoming electrical signal received on one port out to all other connected ports. Consequently, every host connected to the same hub is within the same collision domain. Note that this hub-based 10BASE-T topology dates back to around 1990 and is very unlikely to be found in active deployment in modern network environments.
100BASE-TX Fast Ethernet Standards
The Fast Ethernet standard, designated as 100BASE-TX when operating over twisted pair copper cabling, builds upon the fundamental principles of 10BASE-T but incorporates higher frequency signaling and more efficient encoding methods. This advancement increased the data transfer rate from 10 Mbps to 100 Mbps. 100BASE-TX typically requires Category 5 (Cat 5) or better twisted pair copper cable and has a maximum supported link length of 100 meters (328 feet).
While 100BASE-TX could technically be implemented using a hub, the standard emerged at a time when network switches were beginning to replace hubs as the primary connection point for end systems. The contention-based media access method inherent to hubs does not scale efficiently to accommodate a large number of devices within a single collision domain.
A network hub operates solely at the Physical layer, simply repeating signals. In contrast, a network switch operates at the Data Link layer (Layer 2) of the OSI model. Switches analyze the source and destination Media Access Control (MAC) addresses contained within Layer 2 frames to establish temporary, direct communication paths (circuits) between two specific nodes. Unlike a hub, each port on a network switch constitutes its own separate collision domain. By effectively eliminating the negative impact of contention, switches enable full-duplex transmissions, where a network node can transmit and receive data simultaneously. Furthermore, each node connected to a switch port can utilize the full 100 Mbps bandwidth of its dedicated cable link to that port.
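Here is a minimal sketch of the MAC learning behavior just described (real switches also age out stale entries and handle VLANs, which this ignores):

```python
# A minimal model of how a Layer 2 switch learns MAC addresses: it
# records the source MAC of each incoming frame against the arrival
# port, then forwards to a known port or floods when the destination
# is unknown.

class Switch:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port  # learn the sender's location
        out = self.mac_table.get(dst_mac)
        if out is None:
            return f"flood to all ports except {in_port}"  # unknown destination
        return f"forward out port {out}"

sw = Switch()
print(sw.receive(1, "AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB"))  # flood
print(sw.receive(2, "BB:BB:BB:BB:BB:BB", "AA:AA:AA:AA:AA:AA"))  # forward out port 1
```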
To ensure compatibility with older network devices still equipped with 10 Mbps Ethernet interfaces, Fast Ethernet introduced an autonegotiation protocol. This protocol allows a network device to automatically detect and select the highest supported connection parameters for a given link, including the data rate (10 Mbps or 100 Mbps) and the duplex mode (half-duplex or full-duplex). The original 10BASE-T Ethernet standard specified that a node should transmit regular electrical pulses, known as Normal Link Pulses, when it was not actively transmitting data, primarily to confirm the physical viability of the network link. Fast Ethernet enhanced this mechanism by encoding a 16-bit data packet containing information about its service capabilities into this link integrity signal. This enhanced signal is called a Fast Link Pulse. A Fast Ethernet-capable node can detect a legacy node that does not support autonegotiation and subsequently revert to sending standard Normal Link Pulses for basic link maintenance.
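The outcome of autonegotiation can be modeled as picking the highest-priority mode that both ends advertise. The sketch below simplifies heavily: real Fast Link Pulse link code words also carry selector, remote fault, acknowledge, and next page fields, and the full 802.3 priority list includes more technologies than shown here:

```python
# Simplified model of autonegotiation's result: both ends advertise
# capabilities, and the link runs at the highest-priority common mode.

PRIORITY = [  # highest preference first (subset of the 802.3 ordering)
    "100BASE-TX full duplex",
    "100BASE-TX half duplex",
    "10BASE-T full duplex",
    "10BASE-T half duplex",
]

def resolve(local_modes, remote_modes):
    common = set(local_modes) & set(remote_modes)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None  # no common mode; the link cannot be negotiated

print(resolve(PRIORITY, ["10BASE-T full duplex", "10BASE-T half duplex"]))
# -> 10BASE-T full duplex
```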
While Fast Ethernet (100BASE-TX) is generally not deployed in new network installations, network professionals may still encounter and need to maintain it in legacy installations—older networks that have not been fully upgraded.
Gigabit Ethernet Standards
Gigabit Ethernet represents a significant step forward in network speed, building upon the foundational standards of both original Ethernet (10BASE-T) and Fast Ethernet (100BASE-TX) to achieve data transfer rates of 1,000 Mbps, or 1 Gigabit per second (1 Gbps). When implemented using Category 5e (Cat 5e) or higher-rated copper wire, Gigabit Ethernet is typically specified as 1000BASE-T. A key difference from earlier hub-based Ethernet is that Gigabit Ethernet does not support hubs; it is exclusively implemented using network switches. The maximum cable distance of 100 meters (328 feet) remains applicable for cabling between a network node and a switch port, as well as between two switch ports.
Gigabit Ethernet has become the mainstream choice for new installations of access networks, which refers to the cabling infrastructure connecting end-user workstations and devices to the local network. In these modern deployments, the primary decision often revolves around whether to use copper or fiber optic cabling. Fiber optic cabling offers greater potential for future bandwidth upgrades and is less susceptible to electromagnetic interference, while copper cable generally has a lower initial installation cost, and the vast majority of end-user devices ship with network interface cards (NICs) for copper rather than fiber connections.
10 Gigabit Ethernet (10 GbE) further increases the nominal speed of Gigabit Ethernet by a factor of ten, achieving a data rate of 10 Gbps. Due to the significantly higher signal frequencies involved, 10 GbE has more stringent requirements regarding cable quality and can only operate at reduced distances over unshielded copper cable. Longer cable runs necessitate the use of higher categories of copper cable with shielding or the adoption of fiber optic cable. Standards also exist for even higher speeds, such as 40 Gigabit Ethernet (40 GbE).
10/40 GbE is not yet widely deployed in typical access networks connecting end-user devices, primarily due to the higher cost of compatible network adapters and switch transceiver modules (optics). However, it finds application in environments where a company's operations demand very high-bandwidth data transfers, such as the television and film production industries. Additionally, 10/40 GbE is commonly used as backbone cabling, providing high-speed links between network infrastructure devices like switches and routers, or between critical appliances within a data center. Backbone cabling forms the high-capacity core of a network, interconnecting different network segments.
Fiber Ethernet Standards
Fiber optic cable utilizes pulses of infrared light to transmit data. A significant advantage of fiber optic transmission is its immunity to electromagnetic interference and noise from external sources. Furthermore, fiber optic signals experience significantly less attenuation compared to electrical signals in copper cables. Consequently, fiber optic cable can support much higher bandwidth over considerably longer distances than copper cable.
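To put attenuation in concrete terms: losses are measured in decibels (dB), and the fraction of signal power remaining after a loss of L dB is 10^(−L/10). The per-distance figures below are rough, illustrative assumptions (actual attenuation depends on the cable, frequency or wavelength, and standard), but they show the scale of the difference:

```python
def remaining_power_fraction(loss_db):
    """Fraction of signal power left after a loss of loss_db decibels."""
    return 10 ** (-loss_db / 10)

# Rough, illustrative attenuation assumptions (real values vary by
# cable type, frequency/wavelength, and standard):
fiber_loss_db = 0.4 * 2            # ~0.4 dB/km single-mode fiber, 2 km run
copper_loss_db = 20 * (200 / 100)  # ~20 dB per 100 m twisted pair at high frequency, 200 m run

print(f"fiber after 2 km:   {remaining_power_fraction(fiber_loss_db):.1%} of power remains")
print(f"copper after 200 m: {remaining_power_fraction(copper_loss_db):.4%} of power remains")
```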
Fiber optic cabling is broadly classified into two main types: single-mode fiber (SMF) and multimode fiber (MMF). Multimode fiber is further categorized by optical mode designations (OM1, OM2, OM3, and OM4), with each OM grade specifying different optical fiber characteristics and supporting varying bandwidth capabilities over specific distances.
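As a study aid, commonly cited core sizes and approximate 10 Gbps (10GBASE-SR) reach figures for the multimode grades are sketched below; verify these against current documentation, as exact distances vary by transceiver and standard revision:

```python
# Commonly cited core sizes and approximate 10GBASE-SR reach per
# multimode grade (verify against current standards before exam day):
MMF_GRADES = {
    "OM1": {"core_um": 62.5, "reach_10g_m": 33},
    "OM2": {"core_um": 50,   "reach_10g_m": 82},
    "OM3": {"core_um": 50,   "reach_10g_m": 300},
    "OM4": {"core_um": 50,   "reach_10g_m": 400},
}

for grade, specs in MMF_GRADES.items():
    print(f"{grade}: {specs['core_um']} micron core, ~{specs['reach_10g_m']} m at 10 Gbps")
```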
Ethernet standards for fiber optic media define the types of fiber optic cable and associated optical transceivers (optics) used for different data rates, including 100 Mbps, 1 Gbps, 10 Gbps, and 40/100 Gbps operation. These standards often include variations for long wavelength (LX) optics, which are necessary for long-distance transmissions, and short wavelength (SX) optics, used for shorter links.
Fiber optic cabling is frequently employed for backbone cabling in office networks and for connecting workstations with demanding high-bandwidth requirements, such as those used for video editing and other media-intensive tasks. The primary applications of 10 Gigabit Ethernet (10 GbE) and even faster fiber-based Ethernet technologies include:
- Significantly increasing bandwidth for high-speed server interconnections and network backbones, particularly within data centers and for Storage Area Networks (SANs).
- Replacing existing public data networks based on proprietary technologies with simpler and more cost-effective Ethernet-based solutions, often referred to as Metro Ethernet.
While the array of Ethernet standards might seem complex, grasping these fundamentals is crucial for anyone involved in designing, implementing, or troubleshooting network infrastructure. Knowing the capabilities and limitations of different Ethernet technologies—from copper to fiber, and from Fast Ethernet to 10 Gigabit and beyond—empowers you to make informed decisions and build robust, efficient networks.
Thank you for taking time out of your day to read my notes. I hope you find them helpful on your networking journey and in preparing for your Net+ exam! Next we will explore the different copper cables and connectors that you will need to know for the N10-009 exam. Hope to see you there!