© Copyright 2020 - All rights reserved. The content contained within this book may not be reproduced, duplicated, or transmitted without direct written permission from the author or the publisher. Under no circumstances will any blame or legal responsibility be held against the publisher or author for any damages, reparation, or monetary loss due to the information contained within this book, whether directly or indirectly.

Legal Notice: This book is copyright protected and is for personal use only. You cannot amend, distribute, sell, use, quote, or paraphrase any part of the content within this book without the consent of the author or publisher.

Disclaimer Notice: Please note that the information contained within this document is for educational and entertainment purposes only. Every effort has been made to present accurate, up-to-date, reliable, and complete information. No warranties of any kind are declared or implied. Readers acknowledge that the author is not engaged in rendering legal, financial, medical, or professional advice. The content within this book has been derived from various sources. Please consult a licensed professional before attempting any techniques outlined in this book. By reading this document, the reader agrees that under no circumstances is the author responsible for any losses, direct or indirect, incurred as a result of the use of the information contained within this document, including, but not limited to, errors, omissions, or inaccuracies.

Table of Contents

CompTIA Network+ A Comprehensive Beginners Guide to Learn About the CompTIA Network+ Certification from A-Z
Introduction
Chapter 1: Preparation Tips for CompTIA Network+ Examination
Chapter 2: Introduction to Networks and Networking
  Features of a network
  Hosts, Workstations, and Servers
  LAN, MAN, WAN
  Types of networks
  Extranet, Internet, and Intranet
Chapter 3: Components of a Network
  Coaxial cables
  Twisted pair cables
  Fiber optic cables
  Serial cables
  Characteristics of cables
Chapter 4: Networking Devices
  Hubs
  Modem
  Repeater
  Basic router
  Switch
  Bridge
  Network cards
  Transceivers
  Wireless access point
  Dynamic host configuration protocol server
  Firewall
  Intrusion Detection or Prevention System (IDS/IPS)
  Domain name server (DNS)
  Segmenting networks
Chapter 5: Open Systems Interconnection Model - OSI
  Layered communication
  Importance of using reference models
  OSI reference model
Chapter 6: Internet Protocol
  Process/Application Layer
  Host-to-host Layer
  Internet Layer
  Network Access Layer
Chapter 7: IP Addressing
  Addressing Scheme
  Network addressing
  Unique IP addresses
  Private IP addresses
  Internet Protocol Version 6 – IPv6
  Types of addresses
Chapter 8: Wireless Technologies
  Components of a wireless network
  Setting up a wireless network
  Factors to consider when installing a large wireless network
Chapter 9: Network Management Practices
  Importance of network documentation
  Baseline documentation
  Schematics
  Procedures, policies, and regulations
  Performance optimization
  Monitoring your network
  Importance of optimizing network performance
  Procedures for optimizing network performance
Chapter 10: Network Standards and Protocols
  NetBEUI
  IPX/SPX
  AppleTalk
  TCP/IP
  Routing protocols
  Command Line Tools
Chapter 11: Mitigating Network Threats
  Identifying threats
  How attacks happen
  Protecting your network
Chapter 12: Managing and Troubleshooting the Network
  Check the network to ensure all the simple things are okay
  Determine whether you have a software or hardware issue
  Determine whether the issue is localized to the server or workstation
  Find out the sectors in the network that are affected
  Check the cables
  Troubleshooting a wireless network
  Procedure for troubleshooting a network
Conclusion

CompTIA Network+ Tips and Tricks to Learn and Study about The CompTIA Network+ Certification from A-Z
Introduction
Chapter 1: Overview of Networking Technologies
Chapter 2: IoT Device Networking
Chapter 3: The OSI and TCP/IP Model
Chapter 4: Ports, Protocols, and DNS
Chapter 5: Addressing and Routing
Chapter 6: Network Devices
Chapter 7: WAN Technologies
Chapter 8: Wireless Solutions
Chapter 9: Cloud Visualization and Computing
Chapter 10: Network Operations
Chapter 11: Network Security
Chapter 12: Network Troubleshooting
Chapter 13: Hardware and Software Troubleshooting Tools
Chapter 14: Troubleshooting Common Network Service Issues
Conclusion
References

CompTIA Network+ Simple and Effective Strategies for Mastering CompTIA Network+ Certification from A-Z
Introduction
Chapter 1: How Do I Get My CompTIA+ Network+ Certification?
  Why CompTIA+?
  Why Do I Need It?
  Is It Too Late To Start?
Chapter 2: Tips On Taking The Exam
  Things You Can Do During The Exam
Chapter 3: Further Into Topologies
  Coaxial Cable
  Twisted-Pair Cables
  UTP Cable
  Connecting Fiber-Optic Cables
  Optic or Fiber?
  The Importance of Topology Selection
  What is the Network Backbone?
  Hardware in Topologies
  Topologies in CompTIA Network+
Chapter 4: Ethernet Specifications
  Elements of Ethernet
  Half and Full-Duplex Ethernet
  The Data Link Layer
  Ethernet Frames
  The Physical Layer
  Ethernet in the CompTIA Network+ Exam
  Properties of Cables
  Installing Wiring Distributions
  Verifying Correct Wiring Installation
Chapter 5: Internet Protocol (IP)
  TCP/IP and the DoD Model
  The Process/Application Layer Protocols
Chapter 6: Protocols In The Remaining Layers
  The Host-to-Host Layer Protocols
  Most Important Ideas of HtH Protocols
  Protocols and Layers in CompTIA Network+
Chapter 7: Software And Hardware Tools
  Understanding Network Scanners
Chapter 8: Network Troubleshooting
  Narrow Down The Problem
  Reproducing The Issue
  Hardware or Software Issue?
  Workstation Or Server Problem?
  What Parts Of The Network Are Having Problems?
  Other Cabling Issues You’ll Want To Know About
  General Steps
Conclusion

COMPTIA NETWORK+ A Comprehensive Beginners Guide to Learn About the CompTIA Network+ Certification from A-Z

WALKER SCHMIDT

Introduction

If you are just getting into the world of certifications, you have made one of the best decisions of your life. Certifications open avenues for you to succeed in ways you might never have imagined. By attaining a specialized certification, you stand a better chance of being selected for a position than anyone else in the same category. A certified expert is simply more valuable to an organization.

There is a lot of study material to get through before you become certified, and you will have to sit exams to prove you have what it takes to make it in networking. One of the best things about the Network+ certification is that everything you learn will be useful in your career. This book helps you prepare for your exams by reminding you of facts that you might have forgotten or taken for granted.

Your exam will feature multiple choice questions, the same style you might remember from school. Most students assume that multiple choice questions are easy, and this is the first mistake they make. Even though multiple choice questions put the correct answer in front of you, you must still think critically. Network+ exams attempt to gauge your preparedness to tackle real-life scenarios. For this reason, you will encounter many questions that go beyond cramming acronyms and facts to testing the application of the knowledge you have learned.

Once you go through this book, you will have what it takes to get through the introductory aspects of Network+. This book gives you the basics, the foundation upon which you can build as you pursue other critical certifications in networking.

Why do you need the Network+ certification? Just as with other certification programs like Oracle and Lotus, CompTIA Network+ prepares you for the future by making sure that you are not just a skilled employee, but a skilled and duly certified one. You become an important asset to whichever organization you are affiliated with.

While studying and preparing for your exams, you will notice that the questions often test different things. Do not just focus on answering the questions, but on skill and knowledge transfer. This is why you need a lot of practice before you are ready for an exam. Practicing often will help you look at the questions posed to you analytically and approach them from a problem-solving perspective.

Chapter 1 Preparation Tips for CompTIA Network+ Examination

CompTIA certification is mandatory for most top corporate jobs that involve networking or network security. With this certification, you have a better shot at an interview and at proving your worth in a hands-on capacity. Passing the exams is no mean feat either. Even some of the top professionals in the industry admit that this is one of the exams that gave them sleepless nights.

To prepare adequately for the exam, you must first understand what the exam structure looks like. You will be tested with multiple choice questions and performance-related questions. It is not easy to tell how many questions will come from either type, because each exam is set differently from the last one. The exam might change all the time, but the tips that have helped many professionals succeed over the years still apply today. Knowing these tips will smooth the preparation process and help you face the exam confidently. The following is a brief guide to help you prepare for your CompTIA Network+ exam.

Understand performance-related questions

Performance questions demand that you carry out a specific task. The idea behind such questions is to determine whether you can use the knowledge gained to solve problems in a real scenario. If you can handle performance-based questions, there is a good chance you will perform well in the workplace, because you can provide real solutions to real-life problems in real time. Given that the nature of the business environment today demands more hands-on experience from employees, you can expect the focus of the CompTIA Network+ examination to shift further towards a problem-solving approach in the near future, hence the emphasis on performance-related questions.

Time management

Learn how to manage your time well. Do not spend more time on a question than necessary. If you cannot handle the question right away, mark it and get back to it later. Multiple choice questions should be easier to handle because you do not have to fill in blanks or write an essay; the possible answers are already presented, and you only have to pick the right one. The secret to handling such questions is to familiarize yourself with the technical side of CompTIA Network+. Mastery of the jargon will go a long way.

Using CompTIA resources

When preparing for the CompTIA exam, make use of the resources available on the CompTIA website. You will find these coming in handy as you prepare for the exam. Some of the useful resources include the list of acronyms and the exam objectives. Familiarizing yourself with the list of acronyms makes your work easier when you come across relevant questions.

Building a network

Whether you are sitting for CompTIA Network+ or A+, one of the most important things you must do is learn how to set up a network or a computer. It might seem like too much work at the beginning, but it is an important skill. Building your network from scratch helps you understand the critical elements in practice and improves your mastery of the theory. It is easier to understand something you create on your own than to read literature about something someone else created.

Spare time for practice

CompTIA offers a lot of practice material that you can use when preparing for your CompTIA Network+ exam. From objective questions to practice tests, you have a lot of material to work with. One of the best ways to approach such material is to focus on the sections you struggle with. Go over the sections that you constantly fail when revising or reviewing your answers. These are the ones where you must increase your effort to give yourself a better shot at passing the exam.

Vocabulary mastery

When preparing for or sitting your exam, you must be careful about questions that contain words like BEST, LEAST, or MOST. More often than not, all the answers provided are correct; however, you need to choose the one that corresponds to the specificity of the question asked.

Learn from communities

There is a vibrant online community of experts, professionals, and students. This community will prove quite a resource when you are studying for your exam. Joining such a community is a good idea because of the wealth of experience available there. Besides, the CompTIA exams are set based on the current curriculum, which addresses real-life scenarios. Why is this important? Perhaps the study guide you have been using was written some years back, so even though you are ready for the exam, you might not be exposed to current affairs. In such a community, you will keep up with things that happen daily in the networking field, and you can also exchange ideas with the other members.

Plan for the exam

Think of the CompTIA Network+ exam like a marathon. You cannot start preparing for a marathon one week before race day; preparation takes months. The earlier you start preparing for the exam, the lighter your study and revision burden will be, and you will have covered a lot of study areas by the time your exam date is due. Starting early also gives you sufficient time to revise everything you have learned, and to notice the sections you find most difficult. Dedicate more time to those areas so that you can understand them better.

Take advantage of related questions

You will often come across questions later in the paper that can help you answer some that appeared towards the beginning. This is a common scenario that you should expect. Be steadfast in your answers; otherwise, such questions can make you doubt your choices in an earlier question and change a correct answer. These questions, however, can also help you reflect and remember the correct answer, especially when the questions are related in some way. If you are ever in doubt, mark and skip the question and get back to it later.

Prepare adequately

While preparing for the exam, it is easy to get carried away and forget to look after yourself. You need to be in the right frame of mind to pass this exam. First, make sure you are mentally prepared. Get sufficient rest, drink water, and eat properly before the exam. You want to walk into the exam room without a hint of stress. Some people like to cram at the last minute before they get into the exam room. This might be effective, but it can also be counter-productive: last-minute cramming is often a sign of ill-preparedness.

Know the location of your exam center, and make advance preparations if you have to travel to a different location. You do not want to be stuck in traffic, or held up at the airport because of airline delays and cancellations. If you are sitting your exam in a new location, allow enough time to familiarize yourself with the area, just in case you get lost. More importantly, remind yourself that you have come this far, and you have everything under control.

Types of Questions

There are different ways of testing you in a Network+ exam. When sitting for your certification exam, you will encounter the following types of questions:

● Multiple choice

Most of the questions in a Network+ certification exam are multiple choice questions. A question might have a single correct answer, or more than one correct answer for you to choose from. This is where your knowledge and its application come in handy. While some questions might ask you to choose the correct answer, others might ask you to choose all the answers that you feel are correct. You must, therefore, think carefully before settling on an answer.

● True or false

True or false questions are very easy: each answer has a 50% chance of being right or wrong. You might not expect to find such questions in the certification exam; however, they can appear in a different format. You might be given a multiple choice question whose answer options are true/false statements. In this case, you have to deduce which of the answers is appropriate for the question asked.

● Graphical illustration

Graphical illustrations are used to emphasize a point. In class, you learn to use these illustrations to enhance your understanding of a concept. In a Network+ exam, these questions might be presented to clarify a question. You can be tested with a network diagram or a set of pictures that represent a working network system. Some unique questions use the graphical illustration format to test your knowledge.

● Free response

Free response questions are very rare in a Network+ exam. This is a question where you are expected to provide an answer in your own words.

Your preparedness for the job market

The Network+ examination tests, among other things, your ability to manage time effectively and perform tasks related to your work. Performance-based questions will feature in your exam. However, it is not feasible for CompTIA to create an ideal laboratory situation where you can test your skills; the logistical cost of getting each candidate to a laboratory would not make sense. Instead, CompTIA tests your preparedness for the job market in a different way. They create programs that test your ability to accomplish certain tasks, and you are graded on those tasks. When sitting for a test, you launch a simulation program which operates the same way the real-life situation would.

Simulations make the exam more realistic and test you on problems close to those you would encounter in a normal work environment. One of the reasons simulations are becoming popular is that they reduce the risk of cheating in an exam. Everything you are expected to do is communicated in the test, which leaves little room to cheat.

Chapter 2 Introduction to Networks and Networking

One of the most important skills to help you pass a CompTIA Network+ exam is understanding how computers communicate with one another in a given environment. Beyond passing the exam, this skill will also be useful to you as a networking expert when troubleshooting network problems at work. In this chapter, you will go through a basic introduction to networks, their components, terminologies, and tips, which together form the foundation upon which the rest of your CompTIA Network+ knowledge will be built.

Features of a network

Over the years, networking devices like home routers and hubs have become more affordable, and as a result many people now create small networks at home or in their small offices. Today you can even create a small network from your smartphone. As a certified Network+ expert, you must understand the relevant terminology so that you can master and support such networks.

What is a network?

A network is a group of systems connected to one another for the purpose of sharing resources. Resources could be anything from printers to files; they could also be services, like an internet connection. A network is built around two key elements: software and hardware. Software is installed on the computers and devices within the network, allowing them to communicate effectively with one another. Hardware refers to the physical machines and tools needed to complete the network. Network hardware is composed of two important parts: the medium through which information is shared across the network, such as a wireless link or cable, and the entities that need to share the resources and information, such as workstations and servers.

Hosts, Workstations, and Servers

In a simple network, the user has access to a workstation, through which they can access different applications, like a spreadsheet, email service, or word processor. In networking, the workstation is referred to as the client. A workstation, therefore, is simply a computer running whichever operating system you install on it.

From a workstation, users share files that are stored on the central server with others on the network. The server is a unique, special computer on the network that has more storage space and more powerful memory than all the other computers. This computer is resource-intensive, hence the need to make sure it is more powerful than all the client workstations, so it can support the entire network.

Any computer or device that connects to a network and communicates on that network is known as a host. A host, from this explanation, can be a printer, scanner, workstation, server, router, or any device that uses a network card.

LAN, MAN, WAN

While learning about networks, you will often come across LAN (local area network), MAN (metropolitan area network), and WAN (wide area network). What is the difference between these network types?

A LAN is a network that is restricted to one building. It could be the network at home, in your office, or in your class at college.

A WAN is a network that covers several locations; a WAN is basically a network of LANs. Take the example of a business that sets up multiple offices in different cities. To ensure that these offices all have access to the same set of information, each of their LANs would be connected to create a WAN.

A MAN is a network that exists only within a metropolitan area or city. An example of a MAN is a situation where you have two buildings within the same town, connected together.

Types of networks

There are different types of networks. Each organization or entity uses the network type that suits its immediate needs. This is why you might find a learning institution running a different type of network from the local insurance company. There are basically two types of networks:

Peer-to-peer networks
Server-based networks

Peer-to-peer networks (P2P)

The term P2P is fairly common today, with many systems and applications using it to describe the way they operate. A P2P network runs without a dedicated server. In place of a server, each of the workstations connected to the network shares devices or information. The absence of a server means that all the connected devices on a P2P network have equal access to network resources. In a P2P network, each workstation assumes the double role of server and client.

A P2P network is often useful for small offices, homes, and personal networking needs, where it does not make financial sense to purchase a dedicated server, but where you still need the information and device sharing that comes from being connected to a network. If you work in a small insurance firm that has only three computers, a P2P network would make sense.

You can connect the computers to the printer and any other devices that you need to run the business on a daily basis. You can also share information about your insurance customers across the network. Such a set-up does not warrant a dedicated server, which is often very expensive considering its computing resource requirements. A typical P2P network should have no more than ten systems connected to it.

If you use a Windows device, you will come across the term Workgroup, which is the name Microsoft uses for a P2P network. You will also notice that Windows operating systems like Windows 10 are designed to support P2P networking by default. The network settings are built in, making your work easier.

One of the challenges of this type of network is that there is no central control or administration. Since each client on the network has equal rights, you must configure security features on each of them independently. You must also create user accounts on each of them.

Server-based networks

Server-based networks address the main challenge of running a P2P network: administration. Look at this from the perspective of a P2P network, where you must create user accounts and set security privileges on each machine independently. You end up with a situation where files are scattered all over the network, and as a network administrator, you may have a very difficult time managing the network or the resources available to you. While a P2P network can support up to ten devices, once you have more than four devices actively sharing information and acting as data stores, the need for a central server becomes pressing. This is what a server-based network is all about.

In a server-based network, all the files and data are stored in one location, the server, where everyone can access them. Because of this central location, a network administrator has an easier time managing resources: all you have to do is set permissions and access parameters on the server, instead of doing so on each client independently. You also have a list, on the server, of all the users who have access to the network. A server-based network also makes backup and recovery easier in the event of data loss.

The role of a server in this type of network depends on the services that you need. A server on such a network can offer different levels of utility, depending on the role it serves. Some of these include:

● File and print servers

The role of these servers is to manage the use of printers and files shared between clients on the network. A file and print server makes work easier in a situation where you have a lot of clients who need to access files and printers in the organization. File and print servers will often have any of the following features: very fast hard disks, large memory, redundant power supplies, fast network adapters, fast input/output buses, and multiple CPUs.

● Application servers

Application servers are unique in that they are tasked with running a specific program on the server. They do not do anything other than what they are intended to do. Examples include Microsoft SQL Server and an email server.

● Web servers

Web servers are built to allow access to information on the internet; specifically, they allow you to publish information online. They run HTTP (Hypertext Transfer Protocol). Web servers have become an indispensable part of the modern business environment because they are where web applications and websites are hosted. A web server can host applications built for internal use (an intranet) or information that is shared with the rest of the world on the internet.

● Directory servers

Taking a hint from the name, a directory server contains a list of all user accounts granted permission to log into the network. The list is held within a database, referred to as the directory database. The database holds contact and identifier information about the user accounts, such as the address, fax number, mobile phone number, and so forth. Managing a server-based network is easier because the directory server contains all the information about user accounts. If you connect to the network through any client, your sign-in request is run through the directory server. Your client will only be allowed access to the network if the sign-in credentials are recognized and accepted.

It is also important to mention that one server can assume multiple roles simultaneously. You can use the same server as an application server, a file and print server, and a directory server at the same time. Given this, you do not need to worry about buying a new server whenever you add a new feature or implement something new on the network.

Extranet, Internet, and Intranet

The terms extranet, internet, and intranet are used to explain the reach of the applications you use. You therefore need to know how to identify and tell them apart.

The Internet is used to share information with the rest of the world. To do this, you need an internet-type application that runs on SMTP, FTP, or HTTP. These are internet protocols that are available all over the world.

The Intranet is confined to a company. It is an internal network that cannot be accessed by anyone outside the company. Applications connect to the intranet through FTP or HTTP. Any information on the company intranet is inaccessible to anyone outside the company. While information on the intranet might be accessible in a web browser, it is an internal network, and you would not be able to reach it from outside the company network.

The Extranet refers to a situation where an application is designed for use by internal company employees through the intranet, but must also be accessed by a select group of customers or business partners. The extranet, therefore, is a situation where access to the intranet is extended to select individuals outside the company.

Chapter 3 Components of a Network

It is important to know what makes up a local area network. From routers, switches, and hubs to network cards, all of these are important components that make up the network, and knowledge of them will not just help you pass your Network+ exam, but also help you as you go about tasks in the workplace.

A lot of the conventions related to networking computers and systems remain much as they were in the 1980s. What has advanced is the processes and procedures. Technologies and systems have evolved in response to changes in computing and the need to keep the number of connections to your computers as small as possible. One thing most users look forward to is fast, error-free communication, hence some of the changes we have experienced over the years.

All technologies depend on some form of physical media. Even wireless technology relies on a physical interface somewhere to make it effective. Most of the LANs you come across today are connected through cables. There are three types of cables used for networking today:

Coaxial cables
Twisted pair cables
Fiber optic cables

Coaxial cables

A coaxial cable, also known as coax, has a central copper conductor enclosed in a plastic jacket, with the entire cable shielded by a braid. The shield is covered by PVC (polyvinyl chloride) or FEP (fluoroethylene propylene). Cables covered by FEP are often Teflon-type, also known as plenum-rated. Plenum-rated coating is very expensive, but it is preferred because it meets local fire standards, especially when the cabling is run behind walls or through the ceiling.

Twisted pair cables

Twisted pair cables are made up of several individually insulated wires, twisted together to form pairs. In some cases, they are covered by a metallic shield and referred to as shielded twisted pair (STP). A twisted pair cable without the shield is referred to as unshielded twisted pair (UTP). These cables are commonly used for ethernet cabling.

There are different descriptions used for ethernet cables, each of which has a code. The code for ethernet cables is written in the format below:

N<Signaling>-X

where:
N – the signaling rate, measured in megabits per second
<Signaling> – the type of signaling, either broadband or baseband
X – a unique identifier for the ethernet cabling scheme

For example, 100BASE-TX denotes a signaling rate of 100 Mbps, baseband signaling, and the TX twisted pair cabling scheme.

Why is it important that all the wires used for such cables are twisted? Electromagnetic waves on copper wires can cause interference when the wires are close to one another. This is called crosstalk. Twisting the wires reduces the risk of this interference and, beyond that, protects the cables from interference from external sources. Twisted pair cables are also preferred because they are very easy to work with, are more affordable than most other cabling types, and allow fast communication.

Unshielded twisted pair cables are classified as follows:

Category 1 – Made of two twisted pairs, and preferred for voice communication. It is limited to a frequency of 1 MHz.
Category 2 – Made of four twisted pairs, with a frequency limit of 10 MHz; it can transfer data at up to 4 Mbps.
Category 3 – Made of four twisted pairs, with three twists per foot of cable. It is limited to 16 MHz and can support up to 10 Mbps.
Category 4 – Made of four twisted pairs, with a frequency limit of 20 MHz.
Category 5 – Made of four twisted pairs, with a frequency limit of 100 MHz.
Category 5e – Made of four twisted pairs, rated up to 100 MHz. One of the main differences between Category 5 and 5e is that 5e can transmit on all four pairs at the same time without disturbance, a prerequisite for Gigabit Ethernet.
Category 6 – Made of four twisted pairs, and rated up to 250 MHz.

At the moment, any classification below 5e is either redundant or obsolete in the modern networking environment.
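To make the N<Signaling>-X convention described above concrete, here is a minimal sketch in Python that splits a designation such as 100BASE-TX into its three parts. The function name and the regular expression are illustrative, not part of any standard or library.

    import re

    # Pattern follows the N<Signaling>-X convention described above:
    # rate in Mbps, then BASE or BROAD, then the cabling-scheme identifier.
    DESIGNATION = re.compile(r"^(\d+)(BASE|BROAD)-?(\w+)$")

    def parse_ethernet_name(name: str) -> dict:
        """Split an IEEE-style ethernet designation into its three parts."""
        match = DESIGNATION.match(name.upper())
        if match is None:
            raise ValueError(f"{name!r} does not follow the N<Signaling>-X format")
        rate, signaling, scheme = match.groups()
        return {
            "rate_mbps": int(rate),
            "signaling": "baseband" if signaling == "BASE" else "broadband",
            "scheme": scheme,  # e.g. T, TX, or FX identifies the cabling
        }

    print(parse_ethernet_name("100BASE-TX"))
    # {'rate_mbps': 100, 'signaling': 'baseband', 'scheme': 'TX'}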

Category 5e cable. Source: https://www.cables.co.za/utp-cat5e-cable.html

It is very difficult to fit BNC connectors to a UTP cable. To solve this problem, you can use a registered jack (RJ) connector. UTP cables use RJ-11 connectors if the device uses four wires, or RJ-45 connectors if the device uses four pairs.

RJ-45 and RJ-11. Source: https://www.leroymerlin.fr/v3/p/produits/cable-rj45-rj11-male-male-evology-3-me1400149619

To use these connectors, you need a crimper to attach them to your UTP cable, in the same way you would for BNC connectors. While the die that holds UTP connectors might have a different shape from the one for BNC connectors, today you can get quality crimping tools whose dies are interchangeable and can be used for either type of cable. You will come across RJ-45 in many LAN connections. RJ-11s, on the other hand, are common in digital subscriber line (DSL) connections.

Fiber optic cables

While most cables transmit signals as electricity, fiber optic cables transmit signals as light impulses. This mode of transmission is preferred because the cables are immune to radio frequency interference (RFI) and electromagnetic interference (EMI).

Fiber optic cables transmit the light impulses through plastic or glass cores. While glass is ideal because it allows transmission over a wider distance, plastic is preferred in many cases because it is affordable. Whichever core is used, it is protected inside a plastic or glass cladding with a different refraction index from the core, which helps bounce the light back into the core.

Fiber optic cables are either multimode fiber (MMF) or single-mode fiber (SMF). The two are differentiated by the number of signals that they can transport. While SMF is preferred for long-distance transmission, MMF is ideal for applications that require transmission over a short distance.

Fiber optic cable might be touted as the best thing since sliced bread, but it has its pros and cons too. While fiber optic can transmit information up to 40 kilometers and is safe from RFI and EMI, it is one of the most expensive cabling methods, especially when compared to twisted pair cabling. Fiber optic cable installation is not an easy process, and the cost of repair and troubleshooting is very high compared to twisted pair cabling.

SMF uses a laser light source to carry signals over a long distance. To enable communication, the light source is pulsed through the cable. SMF cables can transmit data at a faster rate than MMF, and over distances more than 40 times greater. MMF also uses light for transmission, but instead of a single focused pulse, the light is reflected through the core along different paths, dispersing as it travels. To focus the light back into the core, the core is lined with a special cladding. MMF is ideal for high-speed bandwidth over a medium range of around 2,000 to 3,000 feet; anything more than this can introduce inconsistencies in transmission. This also explains why MMF is preferred for connections that run within one building, while SMF is ideal for connections that run across multiple buildings.

SMF is primarily available with a glass core, making installation quite a challenge. It must never be pinched or bent sharply to get around a tight corner. MMF, on the other hand, is available both in glass and plastic. Installation is relatively easier, especially with plastic, which makes it a more flexible solution.

Serial cables

Serial, in networking, refers to a scenario where bits are transmitted one after another through the connecting cable, and the communication is interpreted at the end where it terminates, either on a NIC or a different interface. There are several types of serial cables. Recommended Standard 232 (RS-232) is often used to connect data communications equipment and terminal equipment together. Most devices today do not have RS-232 connectors; they have been replaced by FireWire and USB connectors.

USB is the ultimate connector, built into most motherboards today. There is an endless list of devices that you can connect to a computer through the USB port. While most computers come with a maximum of four external USB slots, you can get an adapter; most adapters max out at 16 interfaces. By design, USB can support connections of up to 127 external devices.

Characteristics of cables

Considering the different types of cables that can be used on a network, what are the properties you should weigh when choosing a specific cable over the others? The following are the main features to consider:

Frequency

Cables have a set frequency within which they can transmit bandwidth. Category 5e cables, for example, are rated up to 100 MHz and can transmit up to 1 Gbps over moderate distances. A Category 6 cable, on the other hand, maxes out at 250 MHz and will handle 1 Gbps without any challenges. Considering that Category 6 cables feature thicker conductors and more twists, they are ideal for connections between different floors in a multi-story building.

Immunity

A magnetic field is formed whenever electrons travel through two adjacent wires. This can be a good thing, because magnetic flux is what powers devices like the computers we use. However, the current that this effect induces also brings a few concerns. First, since the wires are radiating, anyone with the right equipment can intercept the message without physically touching the wires. This creates a security issue; some high-profile establishments protect their installations by casing the communication wires within lead shielding. Second, wires can pick up stray current if they are near a magnetic source. It is therefore advisable to keep all wires as far away as possible from strong magnetic sources like speakers, motors, and amplifiers, to avoid EMI.

Distance

The distance between the key components of any network will also help you determine the type of cable you need. While some cables can run further than others without glitches in communication, all networks will suffer attenuation at some point. Attenuation is signal degradation caused by the distance the signal must travel or by the communication medium itself.

Duplex

A communication platform can be either full duplex or half duplex. In half duplex communication, a device can either receive or send at any one time, but never both, like a walkie-talkie. Full duplex communication is a situation where devices can send and receive simultaneously. Full duplex doubles the effective throughput and, as a result, makes communication highly efficient.
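The attenuation described under Distance above is commonly quantified in decibels using the standard formula dB = 10 log10(P_in / P_out). The short Python sketch below illustrates that arithmetic; the function name and sample power levels are invented for the example and are not tied to any particular cable standard.

    import math

    def attenuation_db(power_in_mw: float, power_out_mw: float) -> float:
        """Signal loss in decibels: 10 * log10(P_in / P_out)."""
        return 10 * math.log10(power_in_mw / power_out_mw)

    # A signal entering a long cable run at 5 mW and arriving at 2 mW
    # has lost roughly 4 dB along the way.
    print(round(attenuation_db(5.0, 2.0), 2))  # 3.98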

Transmission speed

Network administrators can manage the network speed, depending on the type of network and the cable or fiber in use, to ensure that the network meets traffic demand. Most administrators apportion maximum speeds, up to 10 Gbps, in the core areas of the network, and allow up to 10 Mbps in segments where the network connects to switches, especially in basic access and distribution areas.

Chapter 4 Networking Devices

By now, you should be well aware of most of the network connections and media that you come across from time to time. These connections originate or terminate at certain devices, referred to as connectivity devices because they connect to some network entity. In this chapter, we will discuss as many of these networking devices as possible.

Hubs

The hub is a device where all the elements of an ethernet network are connected. Each device is connected to the hub through a cable. Through the hub, devices can connect to one another without segmenting the network. Any communication from a device is sent out to all the ports on the hub, so that CSMA/CD (carrier sense multiple access with collision detection) can assess the transmission for any collisions.

Source: https://www.amazon.com/D-Link-including-Charging-Adapter-DUB-H7/dp/B0000B0DL7

The role of a hub, therefore, is to ensure that all devices connected to it receive the same information. However, not all the devices will listen to the information: only the device intended to receive it, according to the address in the information frame, will act on it. While hubs are useful, they have challenges that are rendering them obsolete, especially in corporate environments. Hubs broadcast communication from one device to all the other devices they host. As a result, there is always a risk of collision, and hubs are notorious for causing network collisions in any LAN with a lot of users.
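When CSMA/CD does detect a collision on a shared hub segment, each station backs off for a random interval before retransmitting, doubling the possible wait after each successive collision. The Python sketch below illustrates that truncated binary exponential backoff in slot times; it is a teaching illustration, not driver code.

    import random

    def backoff_slots(collision_count: int) -> int:
        """Pick a random wait after the nth collision on a frame.
        The window doubles each time, capped at 2**10 - 1 slots."""
        k = min(collision_count, 10)
        return random.randint(0, 2 ** k - 1)

    for attempt in range(1, 4):
        print(f"collision {attempt}: wait {backoff_slots(attempt)} slot times")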

Modem

The role of a modem is to modulate digital data onto an analog carrier, allowing transmission over an analog medium. At the far end, the data is demodulated back into a digital signal for the recipient. As the description suggests, the word modem is simply a contraction of Modulator/Demodulator. There are three types of modems you might come across today:

Cable

Cable modems are popular because they offer high speed technology for internet access. Through a cable modem, you can connect a network or an individual computer to the internet using the TV cable. Most TV companies today use their pre-existing cable infrastructure to offer clients data services on frequency bands that are not otherwise utilized. Cable modems feature a simple build: they come with an ethernet port and a coax connector at the rear.

DSL

The digital subscriber line (DSL) modem is preferred over the conventional modem because it is an affordable way to offer high data throughput. One of the benefits of using DSL is that you can still make regular phone calls while online.

Traditional modems

A traditional modem (a plain old telephone service, or POTS, modem) converts your computer's signals into a form that can be transmitted through the POTS line. Many of the modems in use today are POTS modems because computer manufacturers have them embedded into the device motherboards.

Repeater

Repeaters are not so different from hubs. They can be used to connect UTP segments, giving your ethernet segment an extra 100-meter reach. It is not advisable to use repeaters in networks, however, because of latency. If you can, use a wireless network instead of a repeater, so you avoid adding latency to the connection or losing bandwidth in the process. The same applies to hubs.

Basic router

A router is a networking device that allows you to connect several network segments together, creating an internetwork. Routers can be intelligent, programmed to determine the most efficient way of routing and transmitting data to its destination. Such intelligent routers make decisions based on information gathered over time about performance on the network.

You are probably conversant with a normal SOHO (small office, home office) router. A SOHO router allows hosts to connect to the internet through a wireless or wired connection without additional configuration. These routers come with default credentials which you can use, but it is advisable to change them to personalized, secure credentials: anyone can look up the default access credentials for your router online if they know the make or model.

Source: https://www.harveynorman.com.au/dlink-ac1200-unified-wireless-router.html

Some routers are very complex and come complete with their own operating system. You will find this especially with Cisco routers, which run Cisco IOS. Such routers have a CPU to help them process and route data packets efficiently and securely. Given that such routers are intelligent, they can be programmed to perform duties that you would otherwise expect of dedicated devices on your network, like firewall services. To do this, you activate or implement a given feature that is already built into their firmware.

Switch

A switch is just as common a component in a modern network as the hub. The average user can confuse a switch for a hub; however, there are distinct features that tell them apart. A switch learns the source and destination MAC addresses of each frame and the ports the frames should be delivered to, while a hub cannot do that. A hub simply sends everything it receives to all devices connected to it.

Source: https://www.ebuyer.com/704518-netgear-gs108e-prosafe-plus-8-port-gigabit-ethernet-switch-gs108e-300uks
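The way a switch learns which device sits behind which port can be captured in a few lines. The Python sketch below is a simplified model of that learning-and-forwarding logic, with invented names and an assumed eight-port switch; real switches implement this in hardware.

    # MAC address table: maps a learned source address to a port number.
    mac_table: dict[str, int] = {}

    def handle_frame(src_mac: str, dst_mac: str, in_port: int,
                     num_ports: int = 8) -> list[int]:
        mac_table[src_mac] = in_port          # learn: src_mac lives on in_port
        if dst_mac in mac_table:
            return [mac_table[dst_mac]]       # known destination: one port
        # Unknown destination: flood out every port except the ingress,
        # which is exactly what a hub does for all traffic.
        return [p for p in range(num_ports) if p != in_port]

    print(handle_frame("aa:aa", "bb:bb", in_port=1))  # flood: bb:bb unknown
    print(handle_frame("bb:bb", "aa:aa", in_port=2))  # [1]: aa:aa was learned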

Bridge

The role of a network bridge is to connect similar segments of a network together. The idea behind this separation is to reduce collisions by keeping the traffic on the two sides of the bridge apart. When a bridge is used, traffic only passes through it if the transmission is intended to be received on the other side. A bridge comes in handy if you need to separate a very busy network into two segments and manage the traffic accordingly. Bridges and switches use similar bridging parameters, though bridges run as software.

Network cards

Network cards are also known as network interface cards (NICs). The role of a network card is to enable your system to communicate with other devices on the network by sending and receiving data. The NIC converts outgoing data into electronic signals that can be conveyed through the network medium, and converts incoming signals into a format that the system can recognize. The NIC provides the electronic, electrical, and physical connection between your device and the networked media. You will also come across the NIC referred to as a network adapter.

Most devices today come with the NIC pre-installed. For older machines, you can purchase the adapter and install it into the system. A device that comes with the NIC pre-installed has an integrated network card, embedded into the system's motherboard.

Source: https://en.wikipedia.org/wiki/Network_interface_controller#/media/File:Network_card.jpg

For laptop computers, the NIC is often located on one of the sides; for most desktop computers, it is at the back. For NICs used as add-ons, you can plug them into the device through a USB port or through an expansion bus available on the computer. There are many expansion slot types that can be built into a computer, and your task is to make sure the expansion slot and the network card are compatible. The following are some of the common expansion slots that you might come across:

PCI
AGP
PCMCIA
ISA
EISA
VESA
MCA

By design, a PCI card is not compatible with an MCA slot, and so forth. You must match the card type to the expansion slot.

Transceivers

From the nomenclature, you can already sense that a transceiver has something to do with both transmitting and receiving. It is the network component tasked with receiving and transmitting signals across different media, and it is also known as a media converter. The transceiver picks up signals and checks whether they belong to the local system. If the data does not belong to that system, the transceiver discards it; if it does, it is passed along for processing.

You can have either an external or an onboard transceiver. The transceiver allows your NIC, or any other device for that matter, to connect to a media type that it was not built to connect to. An onboard transceiver is built into the NIC, with the media connector located at the back of the network adapter. Common onboard transceiver connectors include the BNC connector and the RJ-45. An external transceiver is one where the media connection is external. To connect a media device to this transceiver, you must attach an extension cable to the NIC. You need an attachment unit interface (AUI) to use an external transceiver. The AUI is also referred to as the Digital Intel Xerox (DIX) connector. Each NIC can only work with a specific media type and transceiver, depending on its connector.

A standard ethernet coax, also referred to as thicknet, employs a connection method where you connect the external transceiver to the AUI of your NIC. The transceiver connects to the media through a vampire tap: a connection where a hole is drilled through the cable's outer layers so a probe can reach the central conductor without otherwise damaging the cable. The vampire tap might be effective, but it has its own challenges. It is largely obsolete today because of the difficulty of positioning the tap properly so that it contacts the conductor without interfering with the surroundings. Beyond that, ethernet coax is also hindered by its very high cost and its size. While it might not be common in modern installations, you might come across it in pre-existing installations from time to time.

Wireless access point

A wireless access point (AP) enables users on mobile devices to connect to a wired network through radio frequency technology. By design, a wireless access point is basically a wireless switch or hub, because it allows you to connect several devices together and create a network.

Wireless access points are commonly used to offer internet access to users in public spaces like airports, hotels, cafés, and libraries. They are relatively easy to set up: setting up is as simple as connecting them to a wired network and turning on the power, and you are good to go.

Source: https://www.cisco.com/c/en/us/products/wireless/small-business-500-series-wireless-access-points/index.html

For a small business network, a wireless access point would be perfect because it is affordable and takes away the challenge of expensive cabling.

Dynamic host configuration protocol server

A dynamic host configuration protocol (DHCP) server assigns each host an IP address. It makes work easier for network administrators because, instead of static IP addressing, it provides IP information automatically. Static addressing is a situation where you assign each host an IP address manually. DHCP servers are efficient in any networking environment, irrespective of size, and almost any hardware can act as a DHCP server, even your router.
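To make the contrast with static addressing concrete, here is a minimal sketch of the lease idea at the heart of DHCP: hand out the next free address from a configured pool and remember which client holds it. The pool, function, and MAC address below are invented for the example; a real DHCP server also handles lease times, renewals, and the full discover/offer/request/acknowledge exchange.

    import ipaddress

    pool = list(ipaddress.ip_network("192.168.1.0/28").hosts())  # assumed pool
    leases: dict[str, ipaddress.IPv4Address] = {}

    def request_address(client_mac: str) -> ipaddress.IPv4Address:
        if client_mac in leases:           # a renewing client keeps its address
            return leases[client_mac]
        in_use = set(leases.values())
        for addr in pool:                  # first free address wins
            if addr not in in_use:
                leases[client_mac] = addr
                return addr
        raise RuntimeError("address pool exhausted")

    print(request_address("00:1a:2b:3c:4d:5e"))  # 192.168.1.1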

Firewall

A firewall is the security guard for your network. It is one of the most important things you must always make sure is running on any network; without a firewall, all communication on that network is accessible to anyone who comes across it. Considering that almost all devices in use today are connected to the internet in one way or another, there are many intruders looking for unprotected networks. Someone with unwarranted access to your network can use it and your devices for anything, including terrorism. They can also prevent you from accessing important parts of the internet.

Firewalls can either be software firewalls installed on a router or server, or dedicated black-box appliances. Every firewall must have at least two network connections: one to the private network and one to the public internet. You can also have an additional firewall connection for equipment and servers deemed both private and public. A firewall is an important part of your network because it is your first line of defense, especially if your network is connected to the internet.
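At its simplest, a firewall compares each packet against an ordered rule list; the first matching rule wins, and anything unmatched is dropped by default. The Python sketch below illustrates that default-deny packet-filtering idea; the rules and function are invented for the example, not a real firewall configuration.

    RULES = [
        # (action, protocol, destination port)
        ("allow", "tcp", 443),  # HTTPS from the public side
        ("allow", "tcp", 80),   # HTTP
        ("deny",  "tcp", 23),   # telnet is explicitly blocked
    ]

    def filter_packet(protocol: str, dst_port: int) -> str:
        for action, proto, port in RULES:
            if proto == protocol and port == dst_port:
                return action          # first match wins
        return "deny"                  # default-deny policy

    print(filter_packet("tcp", 443))   # allow
    print(filter_packet("udp", 53))    # deny: no rule matches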

Intrusion Detection or Prevention System (IDS/IPS)

An IDS is a security tool deployed to detect tactics that hackers might use to exploit your network. An IDS detects attacks on your network, its resources, applications, and services, and it also detects the presence of trojans, worms, and viruses. However, an IDS will only identify, detect, and report such exploits. To stop the attack in question, you need an IPS.

An IPS stays vigilant to protect your network from malicious exploits. It monitors traffic to and from your network, searching for any form of attack, including malicious code. When an attack is identified, the IPS drops the compromised data packets while allowing you to proceed with proper network use without interference. In short, while an IDS will only identify and report a potential threat, an IPS will stop it, drop compromised packets, or shut down the port.

Domain name server (DNS)

One of the most critical servers on your network is the DNS server. DNS is also an important part of the internet. Each website is identified by a unique address, such as http://206.124.115.189. It is impossible to remember all these digits; DNS allows you to enter the website as www.yourname.com instead. In other words, DNS is your phonebook for the internet.

Any device with an assigned IP address has a host name on the internet. This host name is part of what is referred to as a fully qualified domain name (FQDN). Each FQDN has a domain name and a host name. Name resolution is the process of finding the IP address for a given host name, and it can be performed in one of many ways, including through DNS (a short resolution example follows the domain lists below).

Domains are given a hierarchical structure on the internet, with the following considered some of the top-level domains:

.com – for commercial organizations
.gov – for government branches in the US
.edu – for educational institutions
.org – for non-profit organizations
.net – for network institutions
.mil – for the US military
.int – for international bodies like the UN

While these are the traditional top-level domains, other domains have come up over the years and are equally important. These include:

.me
.biz
.post
.travel
.cc
.arts
.info
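Here is the name resolution described above, performed with Python's standard library resolver. The host names are just examples, and the printed addresses will depend on what DNS returns when you run it.

    import socket

    # Forward lookup: turn a host name into the IP address DNS stores for it.
    print(socket.gethostbyname("www.example.com"))  # e.g. 93.184.216.34

    # Reverse lookup: map an address back to a fully qualified domain name.
    print(socket.getfqdn("8.8.8.8"))                # e.g. dns.google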

Segmenting networks

When managing a large network, over time it becomes apparent that you need to break it into smaller segments that are easier to manage and more efficient in resource consumption. More often than not, this need arises because you have too many users on the network, slowing it down. Traffic congestion is one of the worst nightmares for any networking expert. It can be caused by any of the following:

Low bandwidth, or exceeding your allocated bandwidth
Using unnecessary hubs in the network for connectivity
Multicasting
Broadcast storms
Having too many hosts within the broadcast domain

If you encounter any of these challenges, consider splitting the network into smaller segments (network segmentation). You can segment a network using a network switch or a router. Hubs merely extend the collision domain from the main switch; remember that you are still using one network (broadcast domain). It is important to consider breaking up a broadcast domain because whenever a server or host sends a broadcast, every device within the domain has to receive and process it, unless you are using a router.
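One way to act on this advice is to carve a single address block into several smaller subnets, each of which becomes its own broadcast domain behind a router interface. The sketch below uses Python's ipaddress module; the 192.168.0.0/24 block is an assumed example.

    import ipaddress

    lan = ipaddress.ip_network("192.168.0.0/24")   # one large broadcast domain
    for subnet in lan.subnets(new_prefix=26):      # split into four /26 segments
        print(subnet, "-", subnet.num_addresses - 2, "usable hosts")

    # 192.168.0.0/26 - 62 usable hosts
    # 192.168.0.64/26 - 62 usable hosts
    # 192.168.0.128/26 - 62 usable hosts
    # 192.168.0.192/26 - 62 usable hosts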

Chapter 5 Open Systems Interconnection Model - OSI

The open systems interconnection (OSI) model forms the foundation of your knowledge in networking. It is composed of seven layers, each of which plays an important role in effective and efficient communication between systems.

Early on, computers could only communicate with other computers from the same manufacturer. If you had an Acer, for example, you could not communicate with someone who was using an IBM. The OSI model was developed in the late 1970s by the International Organization for Standardization (ISO) to help overcome the challenges that made such communication impossible. The idea behind OSI was to make network devices interoperable, so that users on different networks could communicate with one another without any challenges. While some of the goals of implementing OSI have been met, some challenges still exist and can be improved upon over time.

Networks as we know them today are built around the OSI model; it forms their foundational architecture. By understanding how the OSI model works, you understand how information is transferred from an application, across a network, to a specific application on the other end. All this is achieved through a layered system.

Layered communication
The OSI model is a reference model. A reference model is simply a conceptual blueprint of how communication is expected to happen: it lays out all the procedures and processes that must be completed for communication to take place between the components involved. In the OSI model, these processes are clustered into layers, and any communication network or system designed according to this approach is referred to as a layered model. Developers use reference models to understand how computers operate and communicate on a network, and to understand the functions that must be fulfilled in each layer for smooth communication to take place. What this means is that if a developer is tasked with working on a protocol for a specific layer, they only need to focus on that layer, and not the others. Everyone involved is tasked with a unique role in building the network.

Importance of using reference models
While there are many reasons why reference models, and in particular the OSI model, are useful in communication, the primary objective was to allow interoperability across vendor networks. Beyond this, the layered model is commended for the following reasons:
Changes that take place in one layer are restricted to that layer alone and cannot affect any other layer. Application development is therefore much easier, because developers only need to focus on the layer they are tasked with.
A reference model allows different software and hardware to communicate on the network without any encumbrances.
By defining the functions and tasks that must take place at each level, layering supports standardization in the networking industry. Since network components are standardized, vendors can focus on developing reliable and efficient components.
The OSI model divides communication processes into small segments that are easy to manage, making work easier for network administrators and other experts during troubleshooting, design, and development of network components.

OSI reference model
The OSI reference model is useful in that it helps you transfer data between dissimilar hosts. In this regard, you can transfer data from a Mac computer to a PC. The OSI model is a set of guidelines that developers use when building a network. It comprises seven layers:
Application
Presentation
Session
Transport
Network
Data link
Physical
The application, presentation, and session layers handle communication between applications and end users. The remaining layers are concerned with data transfer from one end of the model to the other. The upper layers, in other words, deal heavily with the user interface, while the bottom layers deal with network addressing and physical transmission.
Application
This is the first point of contact between users and the device. When the user interfaces with this layer, the expectation is network access. A web browser, for example, processes requests by first interacting with the application layer. The application layer, therefore, is the communication interface between the user and the next layer; information is passed down from here to process user requests. The web browser in our example is not part of the application layer; it is only an interface that interacts with the protocols in the application layer to request access to the resources the user needs.

Another role of the application layer is to identify the intended communication partner and establish its availability. In this regard, the application layer determines whether the network has enough resources to meet the user's request. Why is this important? Computing demands unique resources, at times more resources than the user is aware of. Beyond the desktop's own resources, you might need other components of the network, and input from more than one application. The application layer, therefore, makes sure that all the necessary components are available.
Presentation
The presentation layer gets its name from its purpose: it presents data to the application layer. It is primarily tasked with formatting code and translating data, acting as the translator between the user and the application. It codes and converts requests from the user into a form the network understands, and sends feedback back to the user in a form they understand. The average user does not know a thing about computing languages, so the presentation layer converts data to a native format the user can read, such as ASCII. The presentation layer also makes sure that data transferred from the application layer of one system can be understood by the application layer of the system to which a response is intended. It is also in the presentation layer that services like encryption and decryption, and compression and decompression, are carried out. Whenever you use your device to access any media, you enjoy the viewing or listening experience because of the presentation layer.
Session
The presentation layer has several sessions running from time to time, passing information from one layer to another. The role of the session layer is to manage these sessions accordingly. The dialogue between nodes or devices that must communicate to process user requests is managed in the session layer, because it provides the dialogue control needed to support this communication. You have come across communication modes like simplex, half duplex, and full duplex; these modes are organized in the session layer. In the simplest terms, the session layer makes sure that data from each application is kept separate from data from other applications.
Transport
The role of the transport layer is self-explanatory. It segments data and reassembles it into a data stream. The transport layer receives information from the upper layers, compacts it, and pushes it along the data stream to the next layer. This layer checks that there is a stable connection between the originating and destination hosts, and offers data transport services from one end of the spectrum to the other. It is also tasked with establishing a communication session, and with tearing down the virtual circuit once communication has been completed.

The transport layer provides transparent data transfer: any network-dependent information is hidden away from the upper layers. It is in this layer that you will come across TCP and UDP. TCP, as you might already know, is a reliable service, while UDP is not. Everything that happens in the transport layer comes down to the application developer, who decides between TCP and UDP for transport services.
Network
The role of the network layer is to make sure data can be transferred without a hitch. To do this, it manages addresses on the network, identifies where all the devices are located, and from that establishes the most feasible way of moving data on the network. This layer ensures that data can be transferred between devices that are not directly connected to one another. Routers play an important role in the network layer; they allow you to connect networks together and share data across them. The router interface receives a data packet, then checks the IP address of the destination device. The router then sends the packet out the right interface, where it is framed and forwarded to the correct LAN. If the router cannot find the destination network for the packet in its routing table, it drops the packet.
The network layer uses two types of packets:
Data packets
Data packets move user data through the network. Any protocol that supports this is known as a routed protocol. Common routed protocols you might have come across include IP and IPv6.
Route update packets
These packets keep the connected routers updated about the networks they are connected to, and they update the routers frequently. The role of these packets is to make sure that every router has an up-to-date routing table describing every network it can reach.
Each entry in a routing table includes the following:
Interface
The interface refers to the exit point the data packets will use when dispatched to the destination network.
Network address
Network addresses are specific to the protocols. Each router must keep a routing table for each routing protocol, because each protocol has its own addressing scheme. You can think of this like a menu, but in all the possible languages spoken by everyone who comes to that restaurant.
Metrics
The metric refers to the distance to the remote network. Every protocol has a unique way of determining this distance.

Data link
The data link layer provides for the transmission of data, and it also handles error notification. This layer makes sure that messages sent on the network are delivered to the right host device. How does this happen? Messages are converted into data frames, and the layer adds a customized header that includes the hardware address of the originating device and of the destination device. Routers at the network layer are not focused on the location of individual hosts; their concern is the location of networks, and the most efficient way of getting to them. It is the data link layer that identifies each individual device on a network. Remember that the data packets themselves are not altered in transit; they are simply encapsulated with the control information needed to carry them over the relevant media type.
Physical
The physical layer is responsible for two important functions: sending and receiving bits. This layer defines the requirements for managing a physical link between two devices at different end points in the communication cycle. The physical layer also specifies the physical topologies and connectors in use, and it is partly for this reason that disparate systems can communicate with one another.
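As a rough illustration of the layering described above, the sketch below builds a frame layer by layer using the third-party scapy package (an assumed tool choice; it must be installed separately). A TCP segment (transport layer) is wrapped in an IP packet (network layer), which is wrapped in an Ethernet frame (data link layer). The destination address is a placeholder.

    from scapy.all import Ether, IP, TCP  # assumes scapy is installed

    # Encapsulation, innermost to outermost: TCP inside IP inside Ethernet.
    frame = Ether() / IP(dst="192.168.34.25") / TCP(dport=80)  # placeholder address
    frame.show()  # prints the header fields of each layer in turn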

Chapter 6 Internet Protocol

To understand the internet protocol (IP), we must mirror our earlier discussions about TCP/IP. Knowledge of TCP/IP is important to understanding how the internet works. Developed by the US Department of Defense (DoD), TCP/IP has four layers. By design, TCP/IP is essentially a scaled-down version of OSI, which has seven layers. The four TCP/IP layers are:
Network access
Internet
Host-to-host
Process/application
The following table shows the correlation between the DoD model and the OSI model:

OSI                                   DoD
Application, Presentation, Session    Process/Application
Transport                             Host-to-host
Network                               Internet
Data Link, Physical                   Network Access

Process/Application Layer
The process/application layer defines the requirements for application communication between different nodes on the network, and the specification guidelines for user interfaces. This layer deals with the services and applications that IP networks use, as discussed below.
Telnet
The role of this protocol is to emulate terminals, and you will often come across forums where it is referred to as the chameleon protocol. Telnet allows someone using a remote client machine (the telnet client) to access the resources of a different machine (the telnet server). To do this, telnet makes it appear as though the telnet client is a terminal connected to the local network, in the process creating a virtual terminal and allowing interaction with the remote host. One of the shortcomings of telnet is that it offers neither encryption nor security. If you need security features during a remote session, telnet is replaced by secure shell (SSH).

File transfer protocol (FTP)
To understand how important FTP is to your network, consider the fact that without FTP you would not be able to transfer files over an IP network. FTP is special because it doubles as both a protocol and a program. The difference lies in utility: as a program, FTP is run by users; as a protocol, it is used by applications. Through FTP you can access files in different repositories and directories, and also move files from one directory to another. FTP is important for file management between hosts; however, you cannot use FTP to execute a remote file as a program. To use FTP you must have the necessary authentication details.
Secure File Transfer Protocol (SFTP)
You might be concerned about the security of a given network, and thus worried about transferring files across it. This concern is mitigated by SFTP, which allows you to transfer files over an encrypted connection. The encryption is performed by SSH. Apart from the encryption, SFTP performs the same role as FTP: access and transfer of files between computers over IP networks.
Trivial File Transfer Protocol (TFTP)
TFTP is a very simple yet effective version of FTP. It is ideal for people who already know what they are doing: to use TFTP, you generally already know which file you are looking for and where it is located. TFTP is very fast, because it is not loaded with as many functions as FTP. You cannot browse a directory through TFTP; it is purely for sending and receiving files over a network. Because TFTP is a minimalist version of FTP, you are limited in the size of the data blocks you can transfer. TFTP also has no authentication mechanism, making it a rather insecure protocol, and as a result not many sites use it.
Network File System (NFS)
In file sharing, NFS is a godsend. NFS specifically allows interoperability between different file systems. NFS apportions memory on different file systems so that you can still access, store, and transfer files. A good example of this is when you connect to a network using a MacBook, but need to access files from someone who is running a Microsoft operating system on their computer. These are two dissimilar systems: their file systems, security, file naming conventions, case sensitivity, and so forth are not the same. However, NFS makes it possible for both of you to access the same file with the native parameters of your own file system.
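As an illustration of the file transfer protocols above, here is a minimal FTP sketch using Python's standard ftplib module. The server name, credentials, and file name are hypothetical placeholders; note that plain FTP sends credentials unencrypted, which is exactly the weakness SFTP addresses.

    from ftplib import FTP

    # Hypothetical server, credentials, and file -- for illustration only.
    with FTP("ftp.example.com") as ftp:
        ftp.login(user="alice", passwd="secret")    # sent in cleartext over FTP!
        ftp.cwd("/reports")                         # move between directories
        with open("q1.csv", "wb") as f:
            ftp.retrbinary("RETR q1.csv", f.write)  # download a file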

Simple Mail Transfer Protocol (SMTP)
SMTP is responsible for sending your email. The SMTP server is constantly running, refreshing, and checking for new messages addressed to you; if any message is detected, it is delivered. SMTP works hand in hand with POP3: while SMTP is responsible for sending messages, POP3 receives them.
Post Office Protocol (POP)
The POP protocol simply acts as a warehouse for any incoming mail to your address. Immediately after you connect to a POP3 server, any messages that were sent to you are downloaded. The interaction between the client and the POP server ends the moment the messages are downloaded, after which you can interact with them locally.
Internet Message Access Protocol Version 4 (IMAP4)
While POP3 has been effective over the years, standards are shifting, and most people now use IMAP4. One of the reasons IMAP is considered a better upgrade is that it includes security features to protect you. Through IMAP, you have more control over the way you interact with the mail you receive. You can peek inside an email, or read part of it (the header), without fully opening it; this way, you can choose whether to open and read it, ignore it, or delete it altogether. Users with a very active email address will also find IMAP very useful in that you can categorize messages, sort them into groups, store them in a hierarchical order, and so forth. IMAP allows you more control over the way you access your email.
Transport Layer Security (TLS)
TLS is a cryptographic protocol that ensures your data is safe as you use the internet. TLS, together with its predecessor Secure Sockets Layer (SSL), protects your internet activity, whether you are sending an email, browsing the web, or sending a fax message.
Secure Shell (SSH)
SSH lets you create a secure, telnet-like session over a TCP/IP connection. Through SSH, you can securely sign into systems on the network, run applications remotely, and transfer files between networked systems. SSH is useful in that the connection is encrypted, protecting your activity.
Hypertext Transfer Protocol (HTTP)
HTTP is responsible for most of what you do on the web. It manages all communications between web servers and browsers, making sure that whenever you click on a link, you are directed to the correct resource irrespective of its location or yours. Today a secure version of HTTP is in common use: HTTPS, Hypertext Transfer Protocol Secure. Through HTTPS, communication between the web server and browser is protected.
Secure Copy Protocol (SCP)
SCP addresses the flaws of FTP. While FTP allows you to transfer files across the network, it is not a secure platform: whenever you share files, the user credentials are also shared, and since FTP has no encryption in place, those credentials can easily be intercepted. SCP uses SSH to protect your file transfers. Before files are transferred, SCP ascertains that a connection exists between the sender and the recipient host, and it maintains that connection until the transfer is completed.
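To see the HTTP request-response exchange a browser performs on your behalf, here is a minimal sketch using Python's standard urllib; the URL is a placeholder. Because the scheme is https, the TLS protection described above is applied automatically.

    from urllib.request import urlopen

    # Fetch a page over HTTPS, just as a browser would.
    with urlopen("https://www.example.com") as resp:  # placeholder URL
        print(resp.status)      # e.g. 200 (OK)
        print(resp.read(200))   # the first 200 bytes of the page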

Domain Name Service (DNS)
The role of DNS is to resolve internet names to their respective IP addresses. One of the perks of DNS is that it makes work easier. Consider a scenario where you decide to change your service provider: since the website would move to a different server, the IP address would also change. This could make work difficult for everyone who needs to access your website, and you might forget the IP address yourself. Because of DNS, you can change the IP address as many times as you want, and users will barely notice the difference, because the address they have memorized remains the same.

Host-to-host Layer
The host-to-host layer defines the protocols that set up transmission service levels for applications. It is through this layer that end-to-end communication is performed, and it ensures that the delivery of data is free of errors. Packet sequencing is performed at this layer, alongside maintaining the integrity of data. The host-to-host layer matters in your network because it shields the process/application layer from the complexities of the network below it: it receives the data stream and processes the information on the upper layers' behalf. Two important protocols are responsible for operations in the host-to-host layer:
Transmission control protocol (TCP)
User datagram protocol (UDP)
Transmission Control Protocol (TCP)
The role of TCP is to collect chunks of information from applications and break them into smaller segments. Each segment is assigned a sequence number, so that the TCP at the destination can reassemble them in the order in which they were segmented. In such a communication process, the transmitting device first establishes a connection with the recipient. With an active connection, data is transferred, and upon completion, the virtual circuit that was established is torn down. TCP is a reliable, accurate, full-duplex protocol.
User Datagram Protocol (UDP)
UDP is a no-frills version of TCP. It is not resource intensive, but it still does a commendable job when it comes to information transfer. UDP is therefore preferable when managing a network that would otherwise be slowed down by a TCP connection. Another instance where UDP comes in handy is when you need to transfer data whose reliability is not in question.

If, for example, the authenticity of the data was already verified at the process/application layer, you do not need TCP. The network file system likewise manages reliability on its own, making TCP redundant. The choice of UDP over TCP, however, rests with the application developer, not the end user who transfers the files.
Another difference between UDP and TCP is that UDP does not sequence segmented data; the order in which the segments are received at the destination does not matter to it. And while TCP will follow up and retransmit segments that were not received at the destination, UDP does not. It is, therefore, an unreliable protocol. The concept behind UDP is that each application has its own reliability mechanism built into it, so any information transferred through UDP must already be credible. As a developer, you can therefore make one of two choices: use UDP for very fast transfer, or TCP to ensure data reliability. In addition, UDP does not establish a connection between the sender and recipient before transmitting, and no acknowledgment is sent back once the data is received. The following table will help you tell UDP and TCP apart based on their inherent features:

UDP                          TCP
Unreliable                   Reliable
Connectionless               Connection-oriented
Not sequenced                Sequenced
Low resource requirements    Resource-intensive
No virtual circuit           Virtual circuit
No acknowledgment            Acknowledges receipt
No flow control              Uses windowing flow control

Port Numbers
UDP and TCP require port numbers to communicate with the process/application layer, because port numbers are how they keep track of the different conversations flowing to and from the local host. Source port numbers are assigned dynamically by the originating host, and these dynamic values start at 1024 and go up. TCP uses port numbers to identify the source and destination of each sequenced segment. DNS uses both UDP and TCP; the choice of either depends on the command DNS is trying to execute. A minimal socket sketch contrasting the two protocols follows.
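Here is that sketch, using Python's standard socket module. The address and port are placeholders: nothing needs to be listening for the UDP datagram to be sent, while the TCP connect() performs the handshake and fails if no listener exists. In both cases the operating system picks an ephemeral source port (1024 or higher) automatically.

    import socket

    # UDP: connectionless -- address a datagram and send it. No handshake,
    # no acknowledgment; delivery is not verified.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("127.0.0.1", 9999))   # placeholder destination

    # TCP: connection-oriented -- connect() performs the three-way handshake
    # and raises an error unless a listener is on the port.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("127.0.0.1", 9999))
    tcp.sendall(b"hello")    # delivery is acknowledged by the peer
    tcp.close()              # tears down the virtual circuit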

Internet Layer
The internet layer sets the parameters for logical data packet transmission across the network. Through this layer, hosts are assigned an IP address, and data packets are routed between networks. In the DoD model, the internet layer exists for two reasons: to create a single network interface for the upper layers, and to route data.

No other layer performs the task of routing data packets, and this layer also presents one interface through which the upper layers can be reached. The internet layer is important because, in its absence, developers would have to write network-specific versions of each application. It is through this layer that IP and the network access protocols interact to ensure that applications run as they were written to. The internet layer is made up of the following protocols:
Internet Protocol – IP
Everything about the internet layer is embodied in IP; all the other internet layer protocols exist to support it. By design, IP stays aware of all the networks that are interconnected. This is possible because every device on a network has an assigned IP address. IP examines the destination address of each data packet and, based on the routing table, determines the best path on which to send it. IP identifies a device on the network with two pieces of information: the network to which the device belongs, identified by its logical or software address, and the individual machine, identified by its hardware address. Every host device on a network must have an IP address, its logical ID. Through this address, the network has an easier time routing data packets from the host to the desired destination.
Internet Control Message Protocol – ICMP
ICMP is employed at the network layer and serves several purposes in support of IP. IP uses ICMP for messaging services and for managing the network. Messages carried through ICMP are transmitted inside IP packets. ICMP packets contain significant information that a host can use to diagnose specific problems affecting the network. In the event that a router is unable to forward a packet, ICMP alerts the sender, informing them of the failed transmission.
Address Resolution Protocol – ARP
ARP uses a known IP address to discover the hardware address of a host device. Basically, what ARP does is ask the host device to respond with its hardware address: it takes the software address and translates it into a hardware address. In an ARP broadcast, the destination hardware address field in the ARP header is set to all zeros, and the request is broadcast so that every device connected to the local network receives it.
Reverse Address Resolution Protocol – RARP
You might come across a diskless IP machine whose IP address cannot be determined locally, although its MAC address is known. RARP sends out a packet that includes the machine's MAC address, in the process requesting the IP address that is assigned to it. This request is sent to a RARP server, a dedicated machine that answers such requests.
Proxy Address Resolution Protocol – PARP
A host device can be configured with only one default gateway at a time. If that default gateway goes down, the host cannot reach remote networks until you manually configure a new one. PARP eliminates this problem by making sure that host machines can communicate with remote subnets without the need for a default gateway or any routing configuration.

One of the reasons why PARP comes in handy is that you can enable it on one router on your network without interfering with the routing tables of any other routers on the same network. While PARP is useful, it also brings the problem of increased network traffic, and to manage all the mappings efficiently, hosts must maintain a very large table. It is easy to consider PARP a protocol when, in a real sense, it is not: PARP is simply a service that routers run on behalf of other devices on the network, most often PCs.
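The sketch below issues the ARP broadcast described above using the third-party scapy package (an assumed tool; it typically needs root privileges to send raw frames). The target IP is a placeholder for a host on your own LAN.

    from scapy.all import ARP, Ether, srp  # assumes scapy; usually needs root

    # Broadcast an ARP request: "who has 192.168.0.1? reply with your MAC."
    request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.0.1")
    answered, _ = srp(request, timeout=2, verbose=False)
    for _, reply in answered:
        print(reply.psrc, "is at", reply.hwsrc)  # software address -> hardware address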

Network Access Layer
The network access layer oversees the exchange of data between the network and the host. It also oversees hardware addressing, and it defines the protocols for the physical transmission of data. At this layer, frame headers are added around the IP packet. The IP header itself carries a protocol field indicating the origin of the segment it encapsulates, whether TCP or UDP; this ensures that segments are handed to the right protocol at the transport layer when they arrive at their intended destination. The network access layer also uses ARP to identify the destination hardware address and determine whether the data packets can be transmitted on the LAN.

Chapter 7 IP Addressing

To gain a better understanding of TCP/IP, it is imperative that you understand the concept of IP addressing. Every device on a network is assigned an IP address, which is its identifier. You might know the devices by the generic names you have labeled them with, but the most important identifier is the IP address. This is the address the device uses to communicate with all the other devices on the network, and with anything else it needs to reach on the internet. As you learn about IP addressing, there are some unique terms you will come across from time to time. These terms form the foundation of your knowledge of IP addressing, and can also help you troubleshoot network problems. Let's look at some of the important ones:
Broadcast address
This is the address that hosts and applications use to communicate with all the other host devices on a given network. Examples include 255.255.255.255 (a limited broadcast to all hosts) and 172.16.255.255 (a directed broadcast to all hosts on network 172.16.0.0).
Network address
A network address is used when a host device needs to send data packets to a remote network. Network addresses include 192.168.0.0 and 172.16.0.0.
Bit
A bit is one digit, either a 0 or a 1.
Byte
A byte is 8 bits, although depending on whether parity is used, there are instances where a byte means 7 bits.

Addressing Scheme
There are 32 bits of information stored in a single IP address. This information is divided into four segments, each referred to as a byte. IP addresses can be written in any of the following ways:
As a hexadecimal value: C0.A8.22.19
In binary form: 10101001.00011010.00110110.00111100
As a dotted decimal: 192.168.34.25

While hexadecimal identifiers are not used as often as the other two forms, you might still come across IP addresses expressed in hexadecimal, especially in certain programs and applications. One of the best illustrations of hexadecimal identifiers in use is the Windows registry.
A 32-bit IP address is often referred to as a hierarchical, or structured, address, as opposed to a non-hierarchical, or flat, address. It is possible to use either addressing scheme. However, the hierarchical scheme is preferred because it allows roughly 4.3 billion (2^32) IP addresses. The flat addressing scheme, on the other hand, is criticized for the challenges it creates in routing: with flat addressing, all IP addresses are equivalent identifiers, so every router on the internet would have to store the address of every machine that connects to the internet. This creates a very big problem and would make efficient routing impossible.
Hierarchical IP addressing solves the problem of flat addressing by introducing a tiered scheme that identifies, within each address, the network, the subnet, and the host, or simply the host and the network it connects to. Think of the hierarchical addressing scheme like a phone number: each phone number is segmented into an area code, a zone, and finally the customer's unique number. Therefore, while flat addressing would use all 32 bits as one undifferentiated identifier for the device, hierarchical addressing uses different parts of the IP address to identify different components of the network.
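A short sketch in Python makes the 32-bit structure visible, splitting a dotted-decimal address into its four bytes and printing each as 8 bits:

    ip = "192.168.34.25"
    octets = [int(o) for o in ip.split(".")]     # the four bytes
    print(".".join(f"{o:08b}" for o in octets))  # 11000000.10101000.00100010.00011001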

Network addressing
Each network is identified by a unique number: the network address, also referred to as the network number. When you look at the IP addresses of all the devices on a network, you will see that they all share the same network portion of their IP address. Take this IP address, for example: 172.16.34.56. The part designated 172.16 is the network address. While every device shares the network address, the host address is unique to each machine; in our example, the host address is 34.56.
Networks are classed according to their size, in terms of the number of hosts connected to them. A design with a small number of networks, each holding a very large number of hosts, calls for Class A; a design with many networks, each holding very few hosts, calls for Class C. The division between network and host bits depends on the class of the network. There are three main network classes, A, B, and C, as shown in the table below:

Class    1st byte    2nd byte    3rd byte    4th byte
A        Network     Host        Host        Host
B        Network     Network     Host        Host
C        Network     Network     Network     Host

Class A
In a Class A network, the first byte is designated to the network address, and the remaining bytes are allocated to the host address. As shown in the table above, this can be illustrated as follows:
network.host.host.host
If the IP address is 10.16.34.56, 10 is the network address, while the remaining bytes make up the host address. You will also notice that all the devices connected to this network have IP addresses starting with 10.
Class B
This class assigns the first two bytes to the network address, while the rest identify the host:
network.network.host.host
Class C
This class assigns the first three bytes of the identifier to the network address, leaving only one byte for the host address:
network.network.network.host
You might also come across special IP addresses classified under Class D and Class E. Class D addresses fall in the range between 224 and 239 in the first byte, while Class E addresses fall between 240 and 255. Take note that these are special addresses: Class D addresses (224.0.0.0 through 239.255.255.255) are used for multicast, while Class E addresses are reserved for research and scientific purposes. Neither class is assigned to ordinary hosts.
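The classful rules above reduce to a check on the first byte, as in this minimal sketch (note that 127 is excluded from Class A because it is reserved for loopback):

    def ip_class(ip):
        # Classify an address by its first byte, per classful addressing rules.
        first = int(ip.split(".")[0])
        if 1 <= first <= 126:
            return "A"    # 127 is reserved for loopback
        if 128 <= first <= 191:
            return "B"
        if 192 <= first <= 223:
            return "C"
        if 224 <= first <= 239:
            return "D"    # multicast
        return "E"        # 240-255, reserved for research

    print(ip_class("10.16.34.56"))    # A
    print(ip_class("172.16.34.56"))   # B
    print(ip_class("192.168.34.25"))  # C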

Unique IP addresses
As you learn about IP addresses, you will come across a range of addresses that can never be assigned to any host by the network administrator. These IP addresses exist for special purposes. The following is a brief explanation of some of them:
The IP address is all 1s, as in 255.255.255.255 – Such an address broadcasts to all hosts on the current network. This is also referred to as a limited broadcast, or an all-1s broadcast.
The IP address is all 0s – This addressing scheme is popular with Cisco; Cisco routers default to 0s. By definition, this address can be interpreted to mean any network.
All host bits are 1s – This addresses all the hosts on a given network. Take the example of 172.16.255.255: this means all the hosts on the 172.16.0.0 network, which is a Class B network.

All host bits are 0s – This refers to the network itself rather than any individual host on it. Simply put, it is the network address.
127.0.0.1 – This IP address is a special designation for loopback testing. The local host uses this address to send test packets to itself, without creating any traffic on the local network.

Private IP addresses
Private IP addresses are specifically limited to use within a private network; you cannot route these addresses on the internet. The idea behind such addresses is not just security but also conserving address space. Private IP addresses are used by ISPs, corporations, and small home networks alike. The concept here is that many hosts barely need a public IP address of their own to access the internet. With private IP addresses, they can connect to their own networks and communicate with other hosts accordingly. Anyone using a private IP address to reach the internet must use Network Address Translation (NAT). NAT translates a private address in such a way that the user can still access the internet with it. In essence, this means that many people can sit behind the same public IP address while communicating on the internet, saving a great deal of address space.
As a network administrator, how do you determine the best class of private addresses to use? For a corporate network, it is always advisable to use Class A addresses. The reason is that, irrespective of the current size of the network, a Class A network can be scaled up for growth and flexibility, so you can add and remove hosts as and when necessary. For a home network, it is wise to use a Class C network. Class C networks are ideal for home use because they are simple, easy to configure and understand, and easy to manage. A Class C network allows space for up to 254 hosts, which you will probably never exceed. A Class A private network, on the other hand, offers space for more than 65,500 subnets, each of which can handle 254 hosts.
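Python's standard ipaddress module knows the reserved private ranges, so you can quickly check whether an address is private, as in this sketch:

    import ipaddress

    for addr in ["10.0.0.5", "172.16.34.56", "192.168.1.20", "8.8.8.8"]:
        print(addr, ipaddress.ip_address(addr).is_private)
    # The first three fall inside private ranges and cannot be routed on
    # the internet; the last one is a public, routable address.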

Internet Protocol Version 6 – IPv6
For the most part, you will come across IPv6 referred to as the internet protocol of the future. One of the reasons behind the creation of IPv6 was to mitigate IPv4 running out of addresses. Considering how many devices are already connected to the internet with assigned IP addresses, it made sense to prepare for a future where IPv4 was exhausted. One of the hallmarks of IPv6 as a new-age protocol is efficiency: it is designed for full optimization, allowing users to go about their computing and networking needs without concern. Each day, more devices are built that must connect to a network. This is no bad thing, considering that we live in an age where networking and connectivity are mandatory in many environments. Theoretically, IPv4 was designed to support up to 4.3 billion addresses. Of course, not all of these addresses are usable; of the 4.3 billion, roughly 200 million can actually be assigned.

A conspicuous feature that is optional in IPv4 but built into IPv6 is IPSec, which offers end-to-end security. At the moment, there is a lot of talk about end-to-end encryption, which ensures that all communication between the sender and recipient is secure. In the age of the Internet of Things, many entities are keen on intercepting any form of communication, and this explains one of the reasons why IPv6 is useful.
IPv4 is notorious for broadcast storms. A broadcast storm is a situation where you experience too much broadcast traffic on a network, crashing it in the process. Another problem with this scenario is that every single device on the network suffers: the moment a broadcast is sent, each device must stop whatever else it was doing on the network and analyze the broadcast to determine whether it is the intended recipient. To address the broadcast storm problem of IPv4, IPv6 replaces broadcasts with multicast traffic, and it also offers two other delivery methods: unicast and anycast. Unicast is no different from the way it has been implemented in IPv4. Anycast, however, is different: an anycast address can be shared by more than one device, and traffic sent to the address is routed to the closest host that shares it.

Types of addresses
Broadcasts are one of the defining features of IPv4. However, because broadcasts are responsible for a lot of network inefficiency, they were eliminated in IPv6. The following are the key address types and methods of communication in IPv6:
● Unicast
In this form of communication, packets addressed to a unicast address are delivered to a single interface. To balance the traffic load, the same interface address can be assigned to more than one device.
● Link local addresses
Link local addresses work much like the private, non-routable addresses of IPv4. These addresses are not designed for routing. The benefit of link-local addresses is that you can use them to configure a temporary or ad hoc LAN whenever necessary, perhaps to host a meeting or some small task that does not need to be routed but can still have access to local services and files.
● Global unicast addresses
Global unicast addresses are routable and public. They operate the same way in IPv6 as public addresses do in IPv4.
● Unique local addresses
Unique local addresses are also built for non-internet-routing needs; however, they are designed to be globally unique. Because of this, it is highly unlikely that you will ever find one unique local address overlapping with another. Unique local addresses are designed to support communication within a site while at the same time allowing routing across a number of local LANs.
● Anycast
Anycast addresses identify several interfaces to which packets can be delivered, but each packet is delivered to only one of them: ordinarily the closest IPv6 interface in terms of routing distance.
● Multicast
In multicast communication, packets are transmitted to multiple interfaces, each of which is identified by the multicast address. All multicast addresses in IPv6 begin with FF.
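Python's standard ipaddress module can also classify IPv6 addresses, which makes these address types easy to explore; the addresses below are placeholders taken from the well-known prefixes:

    import ipaddress

    print(ipaddress.ip_address("fe80::1").is_link_local)  # True: link-local prefix fe80::/10
    print(ipaddress.ip_address("ff02::1").is_multicast)   # True: multicast begins with FF
    print(ipaddress.ip_address("fd00::1").is_private)     # True: unique local range fc00::/7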

Chapter 8 Wireless Technologies

Wireless networks operate in much the same way that ethernet hubs do: they support communication back and forth, but all devices receive and transmit on the same frequency, which makes such wireless technologies half-duplex. A wireless LAN uses radio frequencies (RF) transmitted from an antenna. Considering how far these signals can travel, they are vulnerable to interception, and many factors in the immediate environment can interfere with the quality of network service. One possible way of improving the network is to increase the transmission power. However, while increasing the transmission power might work, it creates a new problem, opening the network up to the possibility of distortion. Besides, higher frequencies do not come cheap either.
The 802.11 wireless specification was created to support unlicensed operation: under this specification, you do not need a license in most jurisdictions to operate a wireless network. All devices that support wireless connections can therefore communicate without forcing the administrator or user to build a complex, licensed radio network. Because wireless networks transmit data over radio frequencies, in some areas they are regulated by the same laws that govern radio services like AM and FM. The Federal Communications Commission (FCC) in the US oversees the use of wireless devices, and in support, the Institute of Electrical and Electronics Engineers (IEEE) establishes the standards under which the frequencies released by the FCC can be used. For public use, the FCC allows 900 MHz, 2.4 GHz, and 5 GHz. The first two are identified as the Industrial, Scientific, and Medical (ISM) bands, while the 5 GHz band is the Unlicensed National Information Infrastructure (UNII) band. Before you run a wireless network outside of these three bands, you must seek approval from the FCC. The 802.11b/g wireless network is one of the most commonly used all over the world today.
802.11
This standard was the pioneer of WLANs, standardized at 1 Mbps and 2 Mbps and operating in the 2.4 GHz frequency band. While foundational, it was not until 802.11b was released that its uptake increased. There are many committees within the 802.11 standard, each of which serves a unique purpose.

Components of a wireless network
Wireless networks require fewer components than wired networks. Basically, all you need for your wireless network to operate effectively is a wireless NIC and an access point (AP).

Once you understand how these two components work, you can install them easily and operate the network without any challenges.
Wireless Network Interface Card (NIC)
Each host must have a wireless network interface card to connect to a wireless network. Its role is no different from that of a wired network interface card.
Access Point (AP)
You need a central component in the network to enable communication. For a wired network, this would be a switch or a hub; for a wireless network, you need a wireless access point. Most APs have at least two antennas to boost their communication range, and they also have a port through which they connect to a wired network. Some cable still runs through almost every wireless network: most wireless networks are connected to a wired network through an AP, which bridges the two network types.
Wireless antenna
An antenna in a wireless network serves two roles: it can act as a receiver and as a transmitter. There are two types of antennas you will come across in the market, directional and omnidirectional. While directional antennas are point-to-point, omnidirectional antennas are point-to-multipoint. A directional antenna offers greater range than an omnidirectional antenna of the same gain, because all of its power is focused in one direction. The challenge of using a directional antenna is that you must be very accurate when positioning its communication points. For this reason, directional antennas are an ideal choice for point-to-point connections, or for bridging different access points. Omnidirectional antennas are popular for APs because clients usually want to access the network from different directions at any given time; a directional antenna would make this quite a challenge, because the client would have to stay in one direction relative to the antenna to enjoy access.

Setting up a wireless network
You can set up a wireless network in one of two ways: in an ad hoc setup, or in infrastructure mode.
Ad hoc setup
In this setup, the connected devices communicate with one another directly, without using an AP. This is what happens when you create an ad hoc wireless network on your laptop to communicate with other devices that can connect to it. As long as you use the right settings, devices connected to the network can share files without any issues. When installing the network, one of the prompts will require you to choose whether you are using ad hoc mode or infrastructure mode.

For this setup to work, make sure your computers are within 90 meters of one another. Once they can detect one another, you can communicate and share files. The problem with an ad hoc network is that it never scales well, so you should never use it in an organizational setting. This kind of network is also prone to a lot of collisions. One more reason why ad hoc networks are no longer appealing today is that APs have become so affordable that it rarely makes sense to run an ad hoc network.
Infrastructure mode
Infrastructure mode allows you to connect to a network and enjoy the benefits of a wired network without the unsightly cables. In this mode, the NIC communicates through an AP instead of directly with whichever device is on the network, as is the case in an ad hoc setting. All communication between host devices on this network must pass through the AP. When connected in this mode, hosts appear to the other devices on the network the same way they would appear on a wired network. Before you connect your client in this mode, make sure you understand some of the basic concepts, especially security.

Factors to consider when installing a large wireless network
When building a large wireless network, you must adhere to specific design considerations. A lot of organizations today use a mesh infrastructure, partly because it is decentralized and dependable. Mesh infrastructure is also one of the most affordable setups, so most organizations find it feasible. These networks are affordable because each host only needs to broadcast data packets as far as the nearest host. In such a network, every host acts as a repeater: instead of one host struggling to transmit data across the whole network, it carries the data to the next host, which passes it on, and so forth, until the data reaches the intended recipient. A mesh infrastructure is therefore a reasonable consideration, especially when you are building a network over difficult topography.
Mesh topology is implemented with several fail-safes in the form of redundant connections between hosts. Since the design is built around redundant paths, single points of failure are a thing of the past, making a mesh topology perfect for a large institution or installation. Mesh networks are highly reliable: because each host on the network is connected to many other hosts, any one host dropping out of the network does not affect the system. Perhaps one of the hosts malfunctions or experiences a software problem; instead of data hanging, the other hosts on the network simply find an alternative route and continue transmitting the packets. Anyone on the network will barely notice that one of the hosts is missing.
Would you employ a mesh topology on a home network? It sounds good in theory, but in practice a mesh network is not ideal for a home network, or for any small organization operating on a very tight budget.
Signal degradation
Whenever you are installing a wireless network, one of the things you have to worry about is signal degradation. All 802.11 networks use radio frequencies.

With this in mind, the strength of the signal will be determined and affected by many factors, most of which you have no control over. A weak network is an unreliable network, and anyone who connects to it will be frustrated. The following are some of the reasons why you might have a fluctuating wireless signal:
● Interference
Outside interference will affect your network. As mentioned earlier, the 802.11 protocols operate in the 900 MHz to 5 GHz range, so there are many sources that can cause interference as long as they operate within this range. Some of the causes of interference are in your own vicinity, and include another wireless network, mobile phones, microwave appliances, and Bluetooth devices. Any device that transmits on a frequency close to the one your 802.11 wireless network uses will interfere with the network.
● Wireless network protocols
Which protocol did you use when installing the wireless network? We already know there are different protocols under 802.11, and each operates within a specific frequency range. An 802.11b device, for example, can conflict with an 802.11g network.
● Barriers
Wherever you have a wireless network, always remember that physical barriers can affect its ability to carry data. The signal will be weaker if it has to pass through a number of walls to get to the user. A wireless network with a range of around 100 feet might see that range drop to around 20 feet if there are many walls within the office block; the thickness of the walls also impedes network access.
● Distance
This one is pretty obvious: the further away you are from the access point, the weaker your signal will be. Most access points today are built with a range of about 100 meters. To extend this range, you must use amplifiers.

Chapter 9 Network Management Practices

Nothing comes easy. You must always have a plan for everything, and building a successful network to the point where it is up and running is no exception. A good plan is one that includes contingency measures, so that in the event of a problem you can troubleshoot the network. Network management starts at the planning stage. During planning, you map out what the network should do, the goals you hope to achieve, and your objectives. A plan gives you something to refer back to, especially when things are not working, and this is where documentation comes in handy. One of the most important things the documentation must always contain is a clear statement of the baseline for network performance. Against this baseline, you can evaluate performance to determine whether the network is performing at its peak or lagging behind. Having a performance baseline is useful when troubleshooting, because you know the limits of the network's resources.

Importance of network documentation
For most networking experts, documentation is one of the most arduous processes you will ever complete. Most of the time you believe you already know what the network does and how to fix it in the event of a problem. However, never underestimate the importance of documentation. Network documentation should be prepared and stored safely. Keep an electronic copy that you can access easily and modify where applicable. Beyond the electronic copy, keep a hard copy printed and easily accessible, in a location where you can direct someone to obtain and use it in case you are not physically present to troubleshoot the network. Finally, keep another copy of the documentation in a storage facility away from the network, external to the building, so that the documentation survives even if something tragic happens to the building that houses the network. Even with all the computing we are exposed to, it is still important to keep hard copies for the sake of contingency.
Network documentation falls into three different groups:
Baseline documentation
Schematics
Procedures, policies, and regulations

Baseline documentation
A baseline is simply the basic performance level you expect of the network or a service when it is running within the expected environmental conditions and resource limits. A baseline specification might indicate, for example, the number of processors needed to keep a server running at optimum performance, or the amount of data that passes through the server at peak hours. The idea of a baseline is to help you figure out, at a glance, whether the network is performing effectively. In a networking environment, baseline documentation will often cover the following elements:
Memory
Processor capacity
Network adapter
Storage (hard drive)
Once you have the network running, it is wise to determine the base performance of all the important services and sectors of the network. You might need to work with averages too: do not take the first measurements from your initial assessment as the norm. Conduct tests on the network at different times, especially to compare peak and off-peak performance. You want to know how well the network withstands performance pressure. Such information always comes in handy when troubleshooting a network or in the aftermath of a serious issue; it also helps you understand why some devices are behaving the way they are, and what can be done to improve their performance. Today you have access to a variety of network monitoring programs that can help you identify and track baselines, and developers have even included monitoring software in server operating systems to help you identify the base performance level. Even after doing all this, you should never rest on your laurels: monitor the network and revisit the baselines regularly, say twice or thrice a year. This might also show you how fast your systems are depreciating, so you can take appropriate measures to address the issue. A minimal monitoring sketch follows.
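Here is that sketch: sampling the kinds of counters a baseline document records, using the third-party psutil library (an assumed tool choice; any monitoring program that reports CPU, memory, and network counters would serve).

    import psutil  # third-party; an assumed choice of monitoring tool

    # Sample the counters you would record in a baseline document.
    print("CPU %:", psutil.cpu_percent(interval=1))      # processor load
    print("Memory %:", psutil.virtual_memory().percent)  # memory in use
    io = psutil.net_io_counters()                        # network adapter traffic
    print("Sent:", io.bytes_sent, "Received:", io.bytes_recv)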

Schematics
Schematics give you a pictorial or diagrammatic explanation of the network. You can follow a process from its originating point to its terminating point and identify a problem along the way. A good schematic diagram will tell you what happens between points A and B, and why a given procedure cannot take place before another is completed. Schematics are also useful when you are discussing the prospect of expanding your network: you get a visual picture of what the expanded network will look like and whether it will meet your needs. At times, subtle changes in the network overload a given segment, while another segment consumes too many resources without delivering much value. You can create schematics from simple sketches to help you map the way forward.

However, as the project evolves to an advanced stage, you will need special programs to draw a neat, elaborate, and presentable schematic diagram. Whichever way you go about it, there are three different types of schematics you can use when building a network:
● Logical network diagram
A logical network diagram contains things like addressing schemes, specific configurations, applications, firewalls, and protocols. These are the factors that combine to make your network logical and efficient. You must maintain and update the schematic diagrams as often as you update the network itself.
● Physical network diagram
A physical network diagram is a representation of all the physical paths that keep the network running efficiently. This diagram identifies the hardware elements of the network, showing how the pieces come together to form a complete network. To create a good physical network diagram, assume you need to build the network afresh: what devices will you need, and how will they be assembled? A good physical network diagram should answer this. You should also document any hardware or software upgrades in the network and how they affect the setup. In a situation where you are unable to draw anything, ensure your plan at least lists all the network devices. If you ever change anything in the network, make sure you follow through in the network diagram and make the same changes.
● Wiring schematics
As much as everyone is moving towards wireless connectivity, wired connections are still an important part of networking; wired networking forms the backbone of all connections. Wiring schematics are useful, especially for troubleshooting. Color codes usually confuse a lot of people, especially in a network you did not build from scratch. With the schematics, someone else can understand your connections and solve the problem quickly. Another reason a wiring schematic is important is that, in any network, each wire must be plugged into something; you should never have wires dangling without terminating somewhere. It saves time when you know where each wire terminates, whether at the wall, a workstation, a hub, a router, or a switch.

Procedures, policies, and regulations
Procedures, policies, and regulations are the guidelines on how to run and manage the network, and adhering to them ultimately comes down to discipline.
Procedures give a clear description of the steps to follow when something happens; they tell you how to execute a policy. Say someone has been fired from the organization. In your role as network administrator, the procedures spell out how to revoke their privileged access credentials, because a former employee can no longer be assumed friendly to the network. Remember that the human interface is usually the weakest link in even the strongest and most secure network. In most organizations, procedures govern what to do in scenarios such as:
A system audit
An action plan in the event of an emergency
A course of action if the server crashes
How to escalate issues to management
How to assist someone who cannot access their account
Policies set the guidelines on how the network will operate, considering its configuration, and create rules for how users should operate on it. Policies determine things like resource allocation on the network and network privileges. The following are some common scenarios where policies must guide your actions:
Who has access to the network and network resources
How network resources are used
Responsible use of company equipment
Security protocols in place
Frequency of backups
Procedures and policies should be backed by top management; without their support, the consequences for breaching them might never be applied, much to the detriment of the organization.
Regulations are rules set by the company or a governing agency such as a government ministry. Regulations are rigid by design: you either follow them or you don't, and the consequences for not following them are dire. Depending on the governing body, they could include jail time, loss of your operating license, and so forth. Regulations in networking and IT are guided by the triad commonly referred to as CIA, which stands for confidentiality, integrity, and availability. Confidentiality means data should only ever be accessed by those authorized to access it. Integrity means any data must be complete and accurate at the time you access it. Availability means those who are authorized to access the data must be able to reach it when they need it. Information security is governed by many regulations, but one of the most widely used standards is ISO/IEC 27002, formerly known as ISO 17799. ISO/IEC 27002 is the work of the International Organization for Standardization and the International Electrotechnical Commission.

Always remember that you must know the procedures, policies, and regulations of your organization and of the industry in which you are licensed to practice networking. Compliance is mandatory, lest you find yourself in legal trouble for something as simple as forwarding an email to the wrong person.

Performance optimization
By now you understand why it is important to document every aspect of the network design to make sure it works according to plan. Once the network is running, you must monitor it and optimize it for peak performance against the documented baselines. One of the biggest mistakes people make in networking is to assume their networks are perfect. Every network will suffer a flaw at some point, and preparing for those flaws is what saves you time and resources. The best thing about monitoring your network is that you get to understand it better, and can optimize it to improve performance.

Monitoring your network
There are several ways of keeping a close eye on the network. For most people, attention goes to resources like network bandwidth, and there are a lot of tools currently available in the market for this. You also need to understand the health status of your network. You can determine this through the performance logs stored in the operating system. Performance logs help you pinpoint issues with your network, applications or services that might not be running as they should, and anything else that affects the network. Most applications and programs today are built with event logs. These logs show you important information about the events and processes running on the network. If you are running a Windows server, the logs provide a lot of information, including the following:
System – Events from Windows system components, like services and drivers.
Security – Information on sign-in attempts, whether they succeeded or not, and any possible security concern.
Application – Events logged by individual applications and programs, as defined by the developers of those applications.
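As a quick illustration, the sketch below pulls the five most recent entries from each of those three Windows logs by shelling out to the built-in wevtutil tool. The event count and text output format are arbitrary choices, and reading the Security log normally requires administrator rights.

```python
# Query recent Windows event log entries via the built-in wevtutil tool.
# /c:5 limits output to five events, /rd:true returns newest first,
# /f:text asks for human-readable text output.
import subprocess

for log in ("System", "Security", "Application"):
    print(f"=== {log} ===")
    result = subprocess.run(
        ["wevtutil", "qe", log, "/c:5", "/rd:true", "/f:text"],
        capture_output=True, text=True,
    )
    # The Security log prints an access error here unless run as admin.
    print(result.stdout or result.stderr)
```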

Importance of optimizing network performance
Networks matter for many reasons, one of the most important being communication. An optimized network enhances efficient communication. Today you need to ensure communication over your network is reliable, and for that to happen, the network must be properly optimized for peak performance. Optimizing the network means monitoring it to identify flaws and addressing them. It involves a host of activities, including killing some processes on the server, sharing the server load with other devices on the network, installing the latest version of a program, or upgrading hardware to a more recent model. There are many reasons why you must always strive to keep the network running smoothly. Here are some of the most important:
● Uptime
Uptime refers to the duration of time the network is operational and can be accessed by the relevant users. In principle, you should always strive for more uptime. It might not be easy to achieve, but aim for 99.99% uptime.
● Resource-intensive applications
Some of the biggest challenges on any network are applications and programs that hog resources. Such applications are problematic for all other users, which explains why network administrators in organizations go out of their way to block torrent applications. Applications that consume a lot of bandwidth inconvenience everyone else on the network. Some of the notorious culprits include video applications and VoIP communication. Unless you have high-speed internet access, running these services on the network means everyone else has to share the little bandwidth that remains.
● Latency
Latency is the delay between the moment you make a data request and the moment the response is delivered. When latency is high, your device appears to hang because the data it needs arrives too slowly, or not at all.
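If you want to put a rough number on latency, one simple approach is to time how long a TCP connection to a host takes. This is a minimal sketch, and the host and port are placeholders for a server you actually manage:

```python
# Rough latency check: time a TCP connection to a host.
import socket
import time

HOST, PORT = "example.com", 80   # placeholder target

start = time.perf_counter()
with socket.create_connection((HOST, PORT), timeout=5):
    elapsed_ms = (time.perf_counter() - start) * 1000
print(f"TCP connect to {HOST}:{PORT} took {elapsed_ms:.1f} ms")
```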

Procedures for optimizing network performance
When it comes to networking, bandwidth is one of the biggest resource challenges network administrators have to handle. While it would be amazing to work on a network with surplus bandwidth, this rarely happens. More often you are confronted with the reality of having to apportion bandwidth, to police network users by limiting bandwidth for specific roles, and perhaps to throw those who abuse their bandwidth off the network altogether. Your role is to make sure that as much of the allocated bandwidth is available as possible to meet the core needs of the network users, as per the procedures, policies, and regulations within which your organization operates. The following are some procedures that will help you optimize the network accordingly:
● Traffic shaping
Traffic shaping is one of the most effective ways of managing bandwidth on your network. You set parameters for data packets, giving priority to applications that meet the set criteria so that their packets are handled ahead of other traffic. It is essentially directing traffic the way police officers do: you slow down traffic for some applications so that you can decongest the network and allow core functions to be performed. In the process, you clear the network backlog faster, and everyone else resumes normal operations. Bandwidth throttling is the technique behind traffic shaping: you ensure that some applications cannot transmit data packets beyond a certain limit for a set duration of time.
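The classic mechanism behind this kind of throttling is a token bucket: tokens accumulate at the shaped rate, and a packet may only be sent when enough tokens are available to pay for it, so the average rate can never exceed the limit. Here is a minimal sketch of the idea; the rate and burst figures are illustrative.

```python
# Token bucket: the mechanism commonly used for bandwidth throttling.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec    # refill speed (the shaped rate)
        self.capacity = burst_bytes       # largest burst allowed
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_size):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size    # spend tokens: packet may go out
            return True
        return False                      # over the limit: queue or drop

bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=4_000)  # ~1 Mbps
for n in range(5):
    print(f"packet {n}:", "sent" if bucket.allow(1500) else "queued")
```

The first two 1,500-byte packets fit inside the burst allowance; the rest are held back until the bucket refills, which is exactly the slowing-down effect described above.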

● Quality of Service
Under Quality of Service (QoS), your role is to make sure the available resources are utilized appropriately, and everyone on the network enjoys the right service quality without a hitch. To achieve the appropriate QoS, you assign priority hierarchies to different network users according to their needs. In each network you have users whose roles are core to the functions of the organization, and they need more bandwidth than those who only need the internet for basic services. By understanding the bandwidth requirements for each category of users, you can assign them network resources to meet their specific role requirements. There are several ways of going about QoS, each addressing one of the common problems users have with organization bandwidth resources. The concerns you will address are as follows (a jitter calculation is sketched at the end of this section):
1. Out-of-order delivery – This problem arises when data packets use different paths to arrive at the intended destination. The application on the receiving end is tasked with rearranging the packets into the correct order to deliver the intended message. The problem shows when the network experiences delays in rearranging the packets, or when the packets are delivered out of order.
2. Jitter – Since each data packet might take a different route to the intended recipient, some packets might go through a connection that is busier or slower than others. This variation in delay is referred to as jitter. Jitter poses a threat to real-time communication, especially over urgent matters.
3. Delay – There is always the likelihood that data packets take a longer route to the destination, or select a path that is already congested. Such delays often affect applications that are heavy on bandwidth usage, like VoIP.
4. Error – Errors happen when data packets are interfered with in transit. As a result of the interference, the packets arrive in an unusable format, and the recipient must request that the data be retransmitted, which wastes time.
5. Dropped packets – Routers on the network might have to reject some data packets, especially when their buffers are stretched to capacity. The recipient is then kept waiting for the data, which has to be retransmitted.
Through QoS, you can ensure that applications on the server are apportioned bandwidth according to their bit rate, so they work efficiently without delays. If you manage a network with surplus bandwidth, these are issues you might never have to worry about. If your network is limited, however, you must understand how to address each of these scenarios.
● High availability
High availability is an approach where you try to reduce the likelihood of downtime. You offer a guarantee that the network will enjoy a specific duration of uptime within a set time. High availability comes in handy when you manage the network for an organization that deals in critical functions, like banking. You can also work with it when your organization is planning something important, such as live streaming an event: you make sure that for the duration of the event, there is no downtime.
● Load balancing
Load balancing is simply sharing the burden across the network. You analyze the load on the network and apportion it across the network, so that resources are shared equally and the burden is lighter than it would be if a single entity on the network handled it.
● Fault tolerance
Fault tolerance means having backup plans so that if any key element of the network goes down, you do not lose access to the resources associated with that element. The easiest way to implement fault tolerance is to have several devices on the network that perform the service you are safeguarding; if one of them goes down, the others maintain network access while you get the failed one sorted. Think of it as keeping separate hard drives on the network, each a clone of the main one. If the main drive has a problem, users can still access data from the mirrored drives.
● Caching engine
The concept of caching is to keep a dataset that duplicates the important pieces of the original data. Caches help you access everything faster; they speed up load times because the device has knowledge of your usage patterns. A caching engine on your network is a dedicated database that keeps the information network users need, enabling them to reach their services online faster.
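Here is the jitter calculation promised in the QoS list above: jitter can be estimated by averaging the differences between the delays of consecutive packets. The delay values in this sketch are made up for illustration.

```python
# Estimate jitter as the mean variation between consecutive packet delays.
delays_ms = [20.1, 19.8, 35.6, 21.0, 20.4, 48.2, 20.9]  # sample data

diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(f"average delay: {sum(delays_ms) / len(delays_ms):.1f} ms")
print(f"jitter (mean delay variation): {jitter:.1f} ms")
```

A low average delay combined with high jitter is precisely the condition that ruins real-time traffic like VoIP, even when overall bandwidth looks healthy.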

Chapter 10 Network Standards and Protocols

Knowledge of network standards and protocols will always come in handy when troubleshooting network problems, irrespective of the network environment. A network protocol is the language used by systems that intend to communicate with each other. Systems must use the same protocol, or language, in order to communicate effectively. A simple way to understand this is the language barrier you experience when speaking to someone who does not understand your language, and you do not understand theirs. It can be very frustrating: at times you are talking about the same thing, but because neither of you understands the other, it becomes a problem. In networking, the first step when troubleshooting is often to ensure that the communicating systems are all using the same protocol; if this is not the case, you will have problems. The following are the main protocols in use in networking today:
NetBEUI
IPX/SPX
TCP/IP
AppleTalk
Understanding these protocols will help you solve most networking challenges and ensure the devices on the network are communicating as they should.

NetBEUI
NetBEUI stands for NetBIOS Extended User Interface. The protocol was designed by IBM and is common in earlier versions of DOS and Windows; Microsoft was one of its first adopters. NetBEUI is very common in small networks because it is non-routable. A non-routable protocol means that data sent through the protocol cannot pass a router to reach any other network. Networks that use NetBEUI therefore have their communication confined to the local LAN, and this non-routable design is one of the reasons it is barely used today. Because its packets never need routing information, NetBEUI is a simple and highly efficient protocol, and one of the easiest to configure and install in your network. More often than not, all you need is the name of the device and you are good to go.
NetBIOS, mentioned above, supports NetBEUI to enable connected devices to communicate on the network. NetBIOS (Network Basic Input/Output System) is the API (application programming interface) used when making network calls to remote systems. The NetBIOS protocol is included when setting up NetBEUI; NetBEUI needs NetBIOS to manage sessions and their functionality. NetBIOS is also non-routable. However, it can be installed alongside other protocols, like TCP/IP, to ensure that traffic can be shared across different networks. The following are the key communication modes for NetBIOS:
● Datagram
Datagram mode applies where communication is needed without a connection or a logged session. NetBIOS also uses datagram mode for broadcasts. Unfortunately, datagram mode offers no support for detecting or correcting errors; that is left to the communicating application that uses NetBIOS.
● Session
Session mode is necessary in a communication scenario that demands a connection, where NetBIOS is needed to establish the session with the communicating system. In this case NetBIOS will also identify any transmission errors, and retransmit any data that went missing or was corrupted as a result.
NetBIOS is not a transport protocol, so you cannot use it for routing. Instead, it depends on one of the transport protocols (IPX/SPX, NetBEUI or TCP/IP) for transport. To identify systems within the network, NetBIOS uses computer names (NetBIOS names). Such names occupy 16 bytes in total: up to 15 bytes for the name itself and 1 byte for the NetBIOS suffix. For effective communication on the LAN, each NetBIOS computer name must be unique.
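To make the 16-byte layout concrete, the sketch below space-pads a computer name to 15 characters and appends a one-byte service suffix. The 0x20 suffix shown (commonly associated with the Server service) and the computer name are illustrative values.

```python
# Build a 16-byte NetBIOS name: 15 characters of name, space-padded,
# plus one suffix byte identifying the service.
def netbios_name(computer_name: str, suffix: int = 0x20) -> bytes:
    name = computer_name.upper()[:15].ljust(15)  # pad/truncate to 15 chars
    return name.encode("ascii") + bytes([suffix])

encoded = netbios_name("ACCOUNTS-PC")        # hypothetical computer name
print(encoded, "->", len(encoded), "bytes")  # always 16 bytes
```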

IPX/SPX
IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange) is a protocol suite, meaning it contains more than one protocol. It is one of the most popular protocols used with earlier NetWare networks; current NetWare networks (5.0 and above) no longer use IPX/SPX, and most networks now prefer TCP/IP. For the sake of awareness, you might come across IPX/SPX referred to as NetWare Link (NWLink).
The role of IPX in this suite is to route information across a given network. Unlike NetBEUI, IPX is routable. For this reason, the addressing scheme has to identify every system on your network, as well as the network the system is running on. An administrator must first assign a network ID to each network. IPX network IDs are 8-character hexadecimal values, for example 4A87B321. A complete IPX address is the network ID, a period (dot), followed by the 6-byte MAC address of the network card. The MAC address is a unique identifier assigned to every network card; an example is 00-85-4A-8B-C2-25. Putting the two together, a computer with that network card on network 4A87B321 would have the IPX network address 4A87B321.00854A8BC225 (a short worked example appears at the end of this section). Since the MAC address is already part of the address, it does not have to be resolved for communication on the network. Communication on an IPX/SPX network is therefore faster and more efficient than on TCP/IP, which must resolve an IP address to a MAC address before packets can be delivered.
One of the challenges you will experience with IPX/SPX is configuration. It is not as easy to configure as NetBEUI: you must understand network numbers and frame types to configure this protocol. What are these? A network number is the number assigned to a Novell network segment. Network numbers are hexadecimal values of no more than eight digits. The frame type refers to the packet format used by the network. You must ensure that all the systems running on or connected to the network are configured to use the same frame type. If you are connecting devices to SERVER3, which uses the 802.2 frame type, you must set all the devices to 802.2; otherwise none of them will communicate with SERVER3. Speaking of frame types, there are four main ones:
ETHERNET_II
ETHERNET_SNAP
802.2
802.3
If you are using a Microsoft device, the operating system detects the frame type by default, allowing it to complete the IPX/SPX configuration automatically, which makes your work easier. On the same note, if you are working on a network whose member devices have different frame types, all the devices configured for automatic IPX/SPX configuration will default to 802.2.
IPX might be useful for routing packets, but beyond being connectionless it has one major challenge: unreliability. IPX is unreliable because it allows clients to send data packets to destinations without the destination acknowledging receipt of those packets. Connectionless means that IPX does not establish a session between the communicating clients before data transmission. SPX mitigates these shortcomings. SPX is built around establishing connections: it ensures that packets are delivered, and resends any packets that have not been received at the destination.
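As the worked example promised above, the sketch below assembles an IPX address from a network ID and a MAC address; both values are the made-up examples used in this section.

```python
# Compose an IPX address: 8-hex-digit network ID, a dot, then the
# 12 hex digits of the network card's MAC address.
def ipx_address(network_id: str, mac: str) -> str:
    # Strip the separators from a MAC written as 00-85-4A-8B-C2-25.
    return f"{network_id}.{mac.replace('-', '').replace(':', '')}"

print(ipx_address("4A87B321", "00-85-4A-8B-C2-25"))
# -> 4A87B321.00854A8BC225
```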

AppleTalk
AppleTalk is a routable protocol built for Macintosh network environments; it connects multiple systems so they can communicate on the network. Since its introduction, AppleTalk has been implemented in two phases (Phase 1 and Phase 2). Most modern devices use Phase 2.
● Phase 1
Phase 1 was designed specifically for very small network environments. It is restricted to small workgroups and only supports a few nodes on the network.
● Phase 2
Phase 2, on the other hand, is built for larger networks and can handle at least 200 hosts. With Phase 2 you also get support for extended networks, which means you can assign a network segment to different network numbers.

TCP/IP
TCP/IP stands for Transmission Control Protocol/Internet Protocol. This is the most popular protocol in use today. TCP/IP is also a routable protocol, and it is popular partly because it is the foundation protocol of the internet. By design, TCP/IP can support networks and network environments that are not stable. TCP/IP was built for the US Department of Defense (DOD) and the Defense Advanced Research Projects Agency (DARPA). Through TCP/IP, the relevant defense units were able to link dissimilar systems all over the country; to do this, TCP/IP was built with the capability to reroute packets.
TCP/IP has so far proven to be a very capable protocol, not least because it can connect dissimilar network environments, which explains why it became the foundation of the internet. However, this does not mean that it lacks flaws. For all the benefits of TCP/IP, configuration and security are two major challenges this protocol suffers. You need in-depth knowledge of IP addresses, default gateways, and subnet masks to configure and administer a TCP/IP network; once you familiarize yourself with these elements, you will breeze through TCP/IP networking. In terms of security, TCP/IP has an open design, which has exposed it to security breaches and made it one of the most insecure protocols. To protect your network you must go out of your way to implement special technologies that guard the network, the systems connected to it, and the network traffic.
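To get a feel for the addressing knowledge mentioned above, the sketch below uses Python's standard ipaddress module to show how an address and a subnet mask split into a network portion and a host range. The 192.168.1.0/24 network and host address are examples only.

```python
# Explore how a subnet mask divides a network into network and host parts.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")     # example network
print("network address:", net.network_address)   # 192.168.1.0
print("subnet mask:    ", net.netmask)           # 255.255.255.0
print("broadcast:      ", net.broadcast_address) # 192.168.1.255
print("usable hosts:   ", net.num_addresses - 2) # 254

host = ipaddress.ip_address("192.168.1.42")      # example host
print(host, "is on this network:", host in net)  # True
```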

Routing protocols
While discussing the main protocols, we have come across the terms routable and non-routable. We have seen that TCP/IP, IPX/SPX and AppleTalk are routable protocols, while NetBEUI is non-routable. What is the difference between the two? A routable protocol is one whose data packets can pass through a router and be delivered to a different, remote network. A non-routable protocol, on the other hand, cannot transfer packets from one network to another: the protocol is built to be simple, and as such lacks the capability to acknowledge multiple networks. For example, the NetBEUI protocol uses NetBIOS naming to pass data across the network, but NetBIOS names cannot be used to determine the network where the destination client belongs. IPX/SPX and TCP/IP addresses, by contrast, include a network ID that identifies the destination network.

Command Line Tools
Since most people use TCP/IP today, you should learn a few tricks to help you test your network for connectivity issues. There are many utilities you can use to verify the TCP/IP function on your network or devices.
Traceroute (Tracert)
Ever sat down and wondered where the data packets go when you are browsing online? How do they get from your device to the destination? The answers to these questions are found in traceroute. The output of this command shows you all the routers a TCP/IP packet passes through before it reaches the destination. Tracert uses time-outs, time to live (TTL), and Internet Control Message Protocol (ICMP) error messages to display this information. You can also use tracert to identify the router that might be causing problems while troubleshooting a network issue. Use tracert at a command prompt by entering tracert, a space, then the IP address or DNS name of the host. The command returns a list of all the IP addresses and DNS names the packet travels through to reach the destination, along with how long each hop takes. Tracert is useful when diagnosing a problem where a client is unable to reach a web server on the internet: it helps you determine whether the WAN link is working or whether the server has malfunctioned. In some cases, the command might return an asterisk instead of the expected results. This means the link you are investigating is too slow, or the router in question is very busy at that moment. It is also possible that the administrator of that specific router has disabled its ICMP responses.
ipconfig (ifconfig)
This command shows you the current TCP/IP configuration of the workstation. ipconfig returns the default gateway, DNS configuration, and IP address, among other useful information. Since IPv6 is enabled on modern devices, ipconfig shows that information too. When you have changed networks, it helps to request a fresh DHCP lease and see the new IP address your device is using; to do this, type ipconfig /renew.
Address Resolution Protocol (ARP)
ARP translates TCP/IP addresses to MAC addresses using broadcasts. You use ARP on an Ethernet network to determine which machine on the network is using a specific IP address. When the requesting device receives the answer, it adds the mapping to its ARP table for future reference.
ping
ping is a basic utility that lets you determine whether you can reach a host and whether the host is responding. The ping syntax is as follows:
ping <IP address or hostname>
There are many options you can use with the ping command, which reveal different features about the workstation or the host.
nslookup
The nslookup utility helps you query a name server to identify the IP addresses to which specific names resolve. This is ideal when you are configuring a new workstation or server to access the internet. The nslookup command helps you find out the unique features of a given domain name and the servers it connects to, as well as the configuration of those servers.
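If you need these checks from a script rather than typed by hand, here is a minimal sketch that drives ping and performs an nslookup-style name resolution with the standard library. The hostname is a placeholder, and the -n flag is the Windows spelling (Linux and macOS use -c instead):

```python
# Scripted connectivity checks: ping a host, then resolve its name.
import socket
import subprocess

HOST = "example.com"   # placeholder: substitute a host you need to test

# ping: is the host reachable and responding? (-n 4 = four echo requests)
result = subprocess.run(["ping", "-n", "4", HOST],
                        capture_output=True, text=True)
print(result.stdout)

# nslookup equivalent: which IP address does the name resolve to?
print(HOST, "resolves to", socket.gethostbyname(HOST))
```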

Chapter 11 Mitigating Network Threats

There is nothing wrong with being paranoid about your network or network security; in fact, paranoia might actually save you. Today networks are under threat from many directions. Some people want access to your network resources just for fun, because they can; others want to steal important information. There is a lot of damage someone can do once they gain access to your network. Network security is something most people barely take seriously, yet it might be the difference between life and death for an organization: a security breach can cripple the entire network, leaving you a sitting duck. Treat network security as a basic need, and do your best to keep a clean, secure network. There are many security threats you will learn about. Some have been around for years, others will emerge in the future, and some simply evolve into stronger, worse iterations of their former selves. The secret to maintaining your network's integrity lies in mitigating risks. Disaster preparedness is something you should always work on, because even the strongest network is always at risk of a security threat.

Identifying threats
The dangers to your network are immense. There are so many risks out there that you would be foolhardy to ignore them, or to assume you have the most protected network. One tactic that works for a lot of organizations is to engage experts to try hacking the network from time to time, to gauge the level of security and determine whether you need to do more. Most people only recognize malware and viruses as the biggest network security risks, and indeed an infection will ground you. However, there are many other risks that can have just as damning an effect on your network. Anyone who tries to gain unauthorized access to your network usually does so either for reconnaissance or for destructive reasons; some security breaches are so carefully crafted they accomplish both. Security breaches are serious because so much damage can be done with the information accessed from your network. Many people are currently paying the price for using an insecure network: their private and personal data was stolen, and some of them were impersonated. You might end up linked to a terror attack you had no idea about, because someone stole your identity and used it for such. Enough about the scary details, though. Here are some of the common network security concerns you will encounter from time to time as a network administrator:

● Denial of service
Denial of service (DoS) attacks block you from accessing the network or the resources associated with it, and they can be carried out in many ways. DoS attacks are common today and usually target large corporations. The attackers can flood an organization's website with more traffic than the servers can handle, making it impossible for legitimate users to access the site.
● Ping of death
The ping of death is one of the common forms of DoS attack. As you learned earlier, ping is a command-line tool used to determine whether a device is receiving IP requests and responding to them accordingly. So what happens in a ping of death attack? To communicate, your device sends ICMP packets to a remote host to establish its availability. In a ping of death attack, the intruder floods the remote host with malformed ICMP packets, expecting it to be overwhelmed so that it hangs or keeps rebooting. Luckily, most operating systems today ship with patches and security updates that protect against this attack.
● Smurfs
Forget the tiny adorable blue fellows you have seen on TV; smurf attacks are nowhere close to adorable. A smurf attack floods your network with spoofed ping broadcasts (spoofing is using someone else's IP address). How is a smurf attack executed? The attacker spoofs your IP address and then directs an enormous number of pings to the broadcast addresses associated with it. The router that receives the request broadcasts the pings on the network, assuming it is a normal broadcast request. From here, all the other hosts pick up the broadcast, and since every one of them echoes a response to the spoofed address, the replies multiply. In a short while you have a nightmare on the network, because every host on that network is trying to respond to the request. Smurf attacks are more effective against large networks; they benefit from the economies of scale. Smurf attacks might be a thing of the past, but you can never rule out the possibility of one occurring: there is always that one random hacker who takes an old-school approach no one would suspect. Most routers today are programmed so that they cannot broadcast data packets haphazardly.
● Tribe flood network
Tribe Flood Network (TFN) attacks are very complex and belong to the class commonly known as distributed denial of service (DDoS) attacks. They are orchestrated when the attacker launches a series of DoS attacks from several sources, all directed at a set of target devices.
● SYN flood attack
This is an attack where your server or device is flooded with a lot of meaningless requests. Under attack, your device struggles to process a large volume of requests that hold no value. The attacker initiates this process by sending a flood of SYN packets to your network, and when the requests are delivered, your device attempts to respond to all of them.

Within a short while, your network resources are depleted or stretched to capacity. At that point, any incoming requests are rejected, because your network is struggling to deal with the flood of SYN requests.
● Virus
The funny thing about viral attacks is that they usually get a lot of media attention, yet a lot of people fall victim before they realize they are affected. A virus is a simple program whose effect depends on the intention of the developer who coded it. It is not always easy to determine the motivation behind a virus, but it is wise to assume complete devastation; after all, anyone who gains unwarranted access to your devices is up to no good. Commonly reported effects of viral infections include devices going into a rebooting loop, hard drives wiped clean, files deleted, and meaningless messages and emails sent to everyone in your contact list. An interesting thing about viruses is that they never execute on their own; they depend on the user to do something that runs them. Something as simple as downloading a photo online could have dire ramifications for your network if the photo has a virus hidden in it. Most viral infections target places people frequent, like social networks. Once someone has one on their device, it spreads like wildfire.
File viruses are the most common, and there is a likelihood you have fallen victim at some point. These are viruses hidden in the files you share all the time. Since you trust the individual sharing the files, you barely suspect a thing; once you open the file, the virus executes and your problems begin. Remember when we mentioned that viruses are often planted where people frequent? Most are created to hide in applications you use all the time, like a spreadsheet or an MS Word document. You might have enjoyed a presentation and asked the presenter to share the PowerPoint file for future reference; if the file was infected, you become a victim. A file virus is written to infect an executable application or system file. On Microsoft Windows operating systems, these carry extensions like .exe or .dll, and chances are high you will ignore such files, assuming they are system files. When you access the infected file, the virus executes, loads into system memory, and waits for you to launch another application or program. The moment that happens, it infects that program too, and before you know it, you cannot do anything on the network.
A macro virus is a script that performs the intended hack without your knowledge. Unlike file viruses, you do not need to execute anything to initiate it. Macro viruses are common because they are some of the easiest scripts to write. Most of them are harmless, but you can never take chances.
A boot sector virus is one of the worst kinds of viruses that can affect your devices. It embeds itself into the master boot record, and when the master boot record is compromised, there is very little you can do other than wipe the entire hard drive and reinstall the operating system. This virus overwrites the boot sector, preventing the operating system from identifying the boot sector or the boot order. If you turn on your computer and it cannot boot, citing an absent hard drive or an undetected operating system, chances are high that you have a boot sector virus infection.

A multipartite virus affects both your files and the boot sector, and is one of the most dangerous infections; you will almost certainly have to wipe the hard drive. Some of these infections are so nefarious they can stay undetected for months from the point of infection, then cause havoc later when you least expect it. Today a lot of companies have measures in place to protect their networks and devices from such attacks. However, you can take additional measures to protect yourself. Install antivirus programs and update them constantly. Avoid websites and networks that are risky and notorious for viral infections; if you like to read tabloid news, you will constantly be a victim of viral infections.
● Worms
Worms operate like viruses, with one key difference: once you are infected, they spread on their own. You do not need to do anything; as long as worms are in your system, they can do as they please. Worms activate, operate, and do their damage autonomously.

How attacks happen
Depending on their ultimate goal, each attacker will often have an action plan to get what they want. Some attackers will lure you over a period of time, learning as much as they can about you and your network without your knowledge, before they pounce and execute their attack. Attackers can interact with you in a very subtle form, and you will rarely realize you are being targeted; something as simple as a computer game can be enough to allow them access to your network. Armed with the knowledge that someone out there is always trying to gain privileged access to your network, you must exercise caution.
A directed attack is one orchestrated by an attacker who is looking for something specific, or who has a very specific reason for hacking your network. A directed attack is different from something like a virus: a virus might be transmitted from one device to another by unwitting users, taking advantage of a weak system to embed itself in the device. Some viruses clone themselves to look like files you see or access all the time, so you never suspect them at all. Here are some of the most common network attacks you might experience while managing a network:
● ActiveX attack
These attacks are embedded in the small apps or plugins you must install on your computer to access something specific. Some websites require you to install Java or Adobe plugins to play certain media; these are simple ways for attackers to reach your devices. Once you install the app, the malicious program runs in the background without your knowledge, collecting information that is transmitted to the hacker's server. Hackers who use this technique can remotely access everything on your devices. This is a dangerous attack, because someone who can see everything on your hard drive might as well store damning information or plant doctored evidence on your device.

● Auto rooter
An auto rooter is an automated hack that relies on rootkits. This type of attack is common with hackers who want to spy on your network. Once a machine is affected, they have access to everything you do and can monitor your device for as long as they need to.
● IP spoofing
An IP spoof is a situation where the attacker sends data packets using a fake source address instead of their real one. Your network is often susceptible because the IP address is spoofed to look like it comes from a device within the network, when in reality the packets originate from an alien IP address. The problem with IP spoofing is that routers identify the packets and treat them as normal requests within the network, because they recognize the IP address. The best way to deal with such an attack is to have a firewall in place. There is a lot of privileged information on the internet today: think about the number of people who shop online, and the companies that store user credentials and information in the cloud. Data is the new gold, and everyone is trying to hack into some system to obtain useful data. Corporate espionage, for example, is one of the biggest black-hat businesses; people are paid large sums to hack into a system and obtain specific data. Implementing firewalls on your networks is one of the best ways to avoid this problem.
● Backdoor access
A backdoor is a path created by a developer to allow access to a program or app without going through the normal processes. Most developers create backdoors so they can access these apps for different reasons; some use them as a means of troubleshooting the app when normal processes fail. For hackers, backdoors are a way to invade the network. It is always advisable to monitor and inspect the network as often as possible to detect backdoors. You should also conduct system audits frequently to make sure your network security is up to par with present standards. Considering the current state of affairs in data security, most countries place a lot of emphasis on the need for system and network audits. You must do everything in your power to protect the networks you use, or you might be held liable if a data breach ends in misuse of the data obtained.
● Application layer attack
An application layer attack exploits loopholes in some of the programs or applications you use. Most of these attacks target apps and programs that require permissions, because they collect a lot of privileged information. Anyone who hacks into your system with such an attack will not just gain access to the system and devices; they will have access to an information goldmine.
● Packet sniffing
Packet sniffers are tools that network managers use to troubleshoot the network: they scour the network for problems. However, these same tools can also be used by hackers.

Packet sniffers are commonly used by identity fraudsters to steal login credentials on breached networks, along with any other information that might be relevant to their cause.
● Brute force
In a brute force attack, the hacker runs a program against your network that repeatedly attempts to log into your server, for example, until it succeeds. Having gained account privileges, they can then set up backdoor access, so that later on they can enter your network without needing passwords.
● Network reconnaissance
Network reconnaissance is simply spying on the network. The hackers take their time to obtain as much information as they can about your network before they pounce. They may scan network ports or employ phishing techniques to get the information they need.
● Password attack
A password attack is one where the hacker pretends to be a valid user on the network, gaining access to your resources and credentials. Password attacks can be initiated in many ways, and they can also be used alongside other attacks.
● Man in the middle
Man-in-the-middle attacks take place when the hacker intercepts your communication and reads the intercepted data before it is delivered to the recipient. Compromised internet service providers, credit and debit card swipe machines, and rogue ATM operators are common vehicles for these attacks.
● Trust exploits
It has often been said that the weakest link in any security apparatus is the human interface. We tend to hold trust in high regard, and this is what brings down the entire system. You believe someone cannot do something bad because you know them personally and it is not in their character; unknown to you, they might have ulterior motives, or someone else might be using them without their knowledge. Trust exploits occur when a hacker manages to exploit the trust relationships you have in the network.
● Port redirection
In this attack, the hacker uses a host machine that is already compromised, one that is trusted by your network. Since this machine already has privileges, the hacker uses it to funnel traffic into the network, traffic that your firewall would normally block.
● Phishing
Phishing is one of the most sophisticated hacking techniques you will come across today, and there is a good reason it is described as social engineering. As networks evolve, so do the tactics hackers use to obtain privileged information. Phishing attacks get you to hand over information you would never offer in your right mind, without knowing you are doing it. The majority of network administrators have taken steps to protect their networks, and hacking such networks is not easy. Instead of going through the trouble, hackers simply create something that looks legitimate and pass it on to the users, obtaining all the information they need directly from you.

A good example is a hacker who wants to collect information like identification details and date and place of birth. Such a hacker can create a loan app, make it look legitimate, and put it up in an app store. From there, they market the loan app properly and make sure it gets the right attention. Once you download the app, perhaps because you are looking for cheap loans despite a bad credit score, you feed in all the important details you would provide a bank or any lender, and wait for feedback. You never get any feedback from phishing hackers. They already have your full name, residential address, phone number, and email address, and some even prompt you to create a password, which most people reuse across all their accounts. Some phishing hackers clone emails and make them appear to come from a legitimate source, like a well-known government entity or bank. To avoid these, always use official links from the websites you access, and when in doubt, call the official contact numbers to find out the truth before you provide your information. Some phishing attacks use keyloggers, which can run in your system undetected. Most keyloggers today not only send your keystrokes but can even capture screenshots and upload them to the hacker's server without your knowledge. Since you might type a great deal, keyloggers can be programmed to capture only specific information, like emails and passwords or phone numbers.

Protecting your network
You can and should mitigate most, if not all, of the network threats discussed, yet many network administrators fail to do so for different reasons. Some administrators assume certain risks are beyond their purview, and as a result do not give them the attention they require. To protect your network, you should perform the following:
● Active detection
Active detection is a process where you scan your network continuously to detect intrusion. This should be second nature for any network administrator. Remember how you double-check the door before you leave, even though you turned the key all the way? The same applies to network security.
● Passive detection
Passive detection practices require that you log all network activities and events to a file for later review. One of the best analogies for passive detection is installing CCTV cameras on the premises. You might not be watching the cameras all the time, but you are confident they capture everything that transpires; if something is amiss, you can go back to the footage later and review it. The same concept applies to networks: in the event of any network issue, you can always go back to the event logs and try to find out what happened.
● Proactive protection
Proactive protection is about preparing yourself for the worst possible scenario. You work to make the network as close to impenetrable as possible, and all the procedures and steps you take to achieve this are part of your proactive protection mechanisms.

Proactive protection is about vigilance. So what can you do to protect your network from intrusion? Each organization must have rules, regulations, procedures, or policies that guide its operations, and when it comes to network security, nothing is ever too paranoid to be considered.
Make sure you perform network audits as recommended. An audit examines your network to determine whether all the components are safe. While an internal auditor can perform the audit, you need an external auditor for industry-certified standard audits.
Communicate the necessary security policies effectively, so that everyone is aware of their existence. This can be done in the form of a notification on user devices. Something as simple as "UNAUTHORIZED ACCESS IS PROHIBITED, AND IS PUNISHABLE BY LAW" displayed clearly can act as a constant reminder to users that they must stay in check.
Any ports that are not in operation should be disabled, so that guests in the office cannot use them. Someone might come in, plug their laptop into one of the free network ports, and introduce a virus into the network without knowing it.
It is good practice to reset network passwords as frequently as possible. Some organizations perform these changes monthly, while others do it weekly or even daily: the password you use when you come to work in the morning expires when you sign out at the end of the workday, and you get a new one when you sign in the next morning.
Always make sure your network has firewalls running. Firewalls protect the internet connections so that only those with warranted access can use the network resources. There are different firewall products in the market; use one that suits your budget and is relevant to the size of your company.
Keep your antivirus programs updated to the latest version at all times, and run system checks frequently to weed out potential threats to the network. Most organizations perform security maintenance over the weekend when the office is not busy, so that when everyone reports to their desks on Monday morning, the systems are ready.
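To make the point about unused ports concrete, here is a minimal active-detection sketch that probes a host for listening TCP ports and flags any that are not on an approved list. The host address, port range, and allow-list are hypothetical, and you should only scan machines you are authorized to test.

```python
# Probe a host for open TCP ports and flag unexpected ones.
import socket

HOST = "192.168.1.10"      # hypothetical server on your own network
ALLOWED = {22, 80, 443}    # ports you expect to be open (illustrative)

for port in range(1, 1025):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.3)                       # keep the scan fast
        if s.connect_ex((HOST, port)) == 0:     # 0 means the port answered
            status = "expected" if port in ALLOWED else "REVIEW: unexpected"
            print(f"port {port} open ({status})")
```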

Chapter 12 Managing and Troubleshooting the Network

From time to time you will need to troubleshoot the network. You can have this done on schedule or on impulse, in response to an immediate threat; more often, the need for troubleshooting catches you off guard. At times it is the very simple issues that make things difficult in the network. More often than not you worry about a serious problem, struggling to understand the cause, only to realize it was something simple, and perhaps all you needed was to reboot the network.
Network problems can overwhelm you, and it gets worse when you have a problem at peak hours. Everyone on the network is unable to do their work until you sort out the problem, and the pressure can be intense, especially if you work in a fast-paced organization. The first step in troubleshooting a network is to identify and narrow down the possibilities. The network issue might be caused by one of many reasons; narrow them down and eliminate them one by one, especially if you cannot deduce an immediate cause. In troubleshooting, no reason is ever too simple to be possible, so start by eliminating the possibility of simple human error. The following are the four key procedures to follow when troubleshooting a network concern:
Check the network to ensure all the simple things are okay.
Determine whether you have a software or hardware issue.
Determine whether the issue is localized to the server or workstation.
Find out the sectors in the network that are affected.
These four steps will help you eliminate possible causes one by one until you identify the problem and fix it. Let's delve deeper into it.

Check the network to ensure all the simple things are okay
At times it is the simplest explanation that solves your problem. Before you worry about complex causes of the network issue, eliminate the possibility of a very small problem. Many times someone will call you frantically, unable to access their account on the network, only for you to realize that they had Caps Lock on. While assessing the problem, ensure that the correct procedure is followed to access the network, and check that the credentials are correct; someone might be keying in the wrong details inadvertently. You would be surprised at the number of times people enter the wrong details and lock their accounts.

You can also create restrictions on the number of times users can attempt to sign into their devices. This alerts you when someone is struggling to access their account, so you can reach out and assist them accordingly. It might also come in handy by alerting you when someone is trying to access a device they are not supposed to.
● Login problems
If your network problem is user-oriented, ensure the login credentials are correct. Where possible, try to sign into the account from a separate workstation; if that works, try it on the problematic workstation to rule out other causes. If none of these possibilities works, go through the documentation for your network to determine whether there are any restrictions in place you might not be aware of, and make sure the user is not in violation of any of them.
● Power connection
Check the power switches. Are all the devices that should be powered on actually running? There is always a risk that someone tripped on one of the cables and unplugged it from the power source. You would be surprised at the number of times people complain about a blank screen while their computer is powered on, only to realize that the power cable to the screen was not plugged in correctly.
● Collision and link lights
Check the collision light and the link lights. The collision light blinks amber; you will see it on hubs or on the Ethernet network interface card. If this light is on, you have a collision on the network. On a very busy network, collisions are common; however, if the light blinks frequently, the collisions might be excessive and affecting network traffic. Check that the network interface card and any other network devices are working properly, because one of them might have malfunctioned. The link light is green. If the link lights are on for the network interface card and for the hub port where the workstation is connected, communication between the hub and the workstation is working properly.
● Operator problems
Individual operators can have inhibitions that have nothing to do with the network itself, but still lock them out and prevent them from accessing it. Perhaps the system you use is alien to the user; if they do not understand it, chances are high they will struggle to use it. Find out whether the user has any challenges, and if so, walk them through carefully so that they do not feel you undermine them or look down upon them. Explain why they are experiencing the problem. Be patient, and make the user confident enough to reach out whenever they have a similar problem, or any other. If you do not inspire confidence in the user, they may shy away from informing you of a problem and instead attempt to solve it on their own, which only makes things worse.

Determine whether you have a software or hardware issue
Hardware problems can be extreme. One of the devices might have outlived its useful life, and hardware failure might also mean you need to plan for data recovery or retrieval. Fixes for hardware problems involve replacing the devices, updating device drivers, or tweaking the device settings.
Troubleshooting software problems depends on the nature of the issue at hand. Most programs today are operated on a subscription basis: perhaps the subscription expired and was not renewed in good time, so you are locked out of the system, or your user privileges have been reduced to free-account terms. In such a case, follow up with the relevant parties and pay the subscription fee to restore full access.
Remember that whether you are dealing with a hardware or a software issue, you might need to back up your data; ensure you have sufficient space for this.

Determine whether the issue is localized to the server or workstation
Identifying the extent of the problem can help you know how severe it is. If it is a server problem, a lot of people will be affected, and you will have far more to deal with than if it were just one workstation. For a workstation problem, you can try to sign into that account from a different workstation in the same workgroup. If that works, you can trace through the necessary steps to fix the problem. Check the connections, the cable, the keyboard, and so forth. Chances are high that the problem is simple.

Find out the sectors in the network that are affected
Determining which sectors of the network are affected by the problem is not an easy task; there are many possibilities. If a lot of people on the network are affected, your network might be suffering from a network address conflict. Check your TCP/IP settings to make sure that all IP addresses on the network are correct. The problem arises when any two hosts on the network are given the same address within a subnet; this causes duplicate-IP errors, and it might take you a while to spot the cause, as the short sketch below illustrates. If everyone on the network has the same problem, it could be an issue with a server to which they are all connected, which is an easier one to solve.
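To make the duplicate-address idea concrete, here is a minimal sketch in Python. The host names and addresses in the inventory are hypothetical placeholders; in practice this list would come from your network documentation or a management tool.

from collections import Counter

# Hypothetical static-address inventory; real data would come from your
# network documentation or a management tool.
inventory = {
    "ws-01": "192.168.1.10",
    "ws-02": "192.168.1.11",
    "printer": "192.168.1.10",  # conflicts with ws-01
    "server": "192.168.1.2",
}

counts = Counter(inventory.values())
for ip, n in counts.items():
    if n > 1:
        hosts = [h for h, a in inventory.items() if a == ip]
        print(f"Address conflict: {ip} assigned to {', '.join(hosts)}")

A check like this takes seconds to run against documented assignments and can save you the long hunt that duplicate-IP errors otherwise cause.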

Check the cables
The way the network is set up could be causing you problems. If you have checked and confirmed that everything else on the network is fine but the system is still down, you need to look at the cables. Ensure all the cables are connected to their appropriate ports. Patch cables between wall jacks and your workstations might need replacing. People often step on the cables, wheel over them with their chairs, and so forth. If cables are run across the office floor, you might need to replace them, and probably consider a better way of running the cables.
There are several cable issues that you might be experiencing. Most of them are basic, but they are the foundation of your network, so you have to know about them. Here are some of the cabling issues you might encounter:
● Interference
Computers are highly susceptible to signal interference. Radio transmitters and TV sets interfere with computers much of the time, because these devices generate radio frequencies during transmission. To avoid this problem, use shielded network cables for the entire network.
● Shorts
A short circuit might be caused by a physical fault in the cabling. Today there are special tools that you can use to locate the short. More often than not, you will need to fix or replace the cable.
● Collisions
If two devices on your network transmit at the same time on the same segment, there will be a collision. Collisions are possible if you are still using older Ethernet networks or hubs. Replace hubs in the workplace with switches where possible, because switches are intelligent and can help you prevent collisions on the network.
● Echo
An echo is an open impedance mismatch. With cable-testing equipment, you will know whether your cables are completing the circuit or not; test to identify a bad connection. If you detect an echo on all the wires at the same place, you might have a cut cable that needs replacing. Today, some special testing equipment can show the exact location of a cut even if the cables run behind a wall.
● Attenuation
Attenuation is a situation where the medium within which signals travel degrades the signal. All networks experience this problem, and the risk of attenuation depends on how you lay the cable. Take copper, for example: you should regenerate the signal with a switch or a hub after every 100 meters (a quick calculation following this list illustrates the idea). If you use fiber optic cable, however, you get a much longer distance before the signal degrades. Consider your organization's needs and, if possible, use fiber optic cables instead of copper. If you cannot afford fiber optic cables, place a hub or switch at appropriate intervals to prevent attenuation.
● Crosstalk
Wires that are in close proximity to one another experience crosstalk when they carry current. To reduce the risk of crosstalk, paired wires are twisted and set at 90 degrees from one another. The tighter the wires are twisted, the less crosstalk you will experience on the network.
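The 100-meter copper guideline above lends itself to a quick back-of-the-envelope calculation. The Python sketch below assumes a flat 100 m segment limit, which is an illustration only; real cable runs also depend on cable category and installation quality.

import math

def repeaters_needed(run_length_m: float, max_segment_m: float = 100.0) -> int:
    """Regenerating devices (switches/hubs) needed along a copper run,
    assuming the signal must be regenerated every max_segment_m meters."""
    if run_length_m <= max_segment_m:
        return 0
    return math.ceil(run_length_m / max_segment_m) - 1

print(repeaters_needed(250))  # a 250 m run needs 2 regeneration points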

Troubleshooting a wireless network
Most users appreciate wireless networks today, especially because they are easy to access from a wide range, depending on the settings. Wireless networks also take away the problem of running cables all over the place. For network administrators, however, wireless networks can present one of the biggest challenges during troubleshooting. First, wireless networks are synonymous with configuration problems. Often, when you have a problem with the wireless connection, you have to go through the steps discussed above to make sure the hardware is okay before you get into troubleshooting the network itself. The following are some of the common challenges you might experience with a wireless network:
● Encryption challenges
Encryption is mandatory to protect all communication across a wireless network. Each network uses a particular encryption process; some networks use WPA2, others use WEP, and so forth. For the sake of security, make sure you use the best encryption protocol available for your network. To make work easier, always make sure everyone on the network has their devices configured with the same encryption.
● Interference
Wireless networks transmit data packets and signals through radio waves. For this reason, they are more susceptible to interference than cabled networks. A wireless network might suffer interference from something as small as a Bluetooth device attached to a computer in the office. This is prevalent especially when the source of interference is in close proximity to the network.
● Channel problems
A lot of wireless networks operate within the frequency range between 2.4 GHz and 5 GHz, and within these frequencies there are many channels. Some channels are allocated more bandwidth than others, which is why they are clearer and more stable. Most of the time you will barely have an issue with channel configuration, unless someone intentionally or accidentally forces their device to use the wrong channel.
● Mismatched ESSID
A wireless device will always search for Service Set Identifiers (SSIDs) in close proximity. It might also search for an Extended Service Set Identifier (ESSID). If you are operating in a building where there are many ESSIDs, you might experience interference, especially when one of them has a stronger broadcast than your own.
● Frequency issues
Each channel determines the frequency that the wireless devices must use. However, some devices allow you the freedom to set the device to a specific frequency. If you choose to configure the frequency manually, always remember to do the same for all the devices on the network; if you do not, the devices will not communicate. If you have many devices to add onto the network, it is always safer to use the default setting.
● Distance
The distance problem arises when the clients are too far from the network. One of the solutions here is to move the antenna or router as close to the clients as possible. If you happen to own a device with a very strong signal, you have to rethink the broadcast distance, because you might be susceptible to unwarranted access.
● Antenna placement
The best position for the wireless antenna is at the center of the wireless network, or as close to it as you can get. If this is not possible, you can also set an antenna far from the network and connect a cable to it. Poor antenna placement translates to poor network performance, and in some cases you might not have network access at all.
● Bounce
Bounce is common in a wireless network that transmits signals over a wide area. To make sure that everyone has proper access to the network, it is advisable to install network reflectors or repeaters to boost the network. However, you should only do this if you can control the network signals; otherwise, you will end up creating a very large network, which becomes difficult to manage and is also susceptible to hacks.

Procedure for troubleshooting a network
Having looked at all the possible ways of troubleshooting a network issue, the following are the appropriate steps you should follow:
1. Gather information
You cannot solve a problem without knowing what it is all about. Collect as much information as you can about the network problem. How long has the problem persisted? What are the challenges users are experiencing? Which part of the internet or network are they unable to access? Ask all the questions; even those that might seem insignificant could point you in the right direction.
2. Identify the affected sectors
Whenever there is a network glitch, someone somewhere will be unable to do their work. Investigate to find out who is affected, and how. During this process, if someone comes to you with a network problem, it can help to have them walk you through it from the beginning. You might realize the cause of the problem in an instant.
3. Scan for recent changes
If you follow through the problem with the user and manage to recreate it as they described, it means you can track any changes that might have taken place on the network in light of their recent activities. Take note of any error messages displayed, as they might also help you diagnose and solve the problem.
4. Hypothesize the possible causes
A hypothesis is about listing the possible causes of the problem and then accurately narrowing them down to the right one. In some cases you might determine the problem immediately. In severe cases, however, a problem on the network is a culmination of many other minor problems, so having a list of possible causes might help you diagnose and fix all the small ones as you build up to the larger issue.
5. Does the problem warrant escalation?
While you should be able to fix most networking problems on your own, there are situations that might be out of your hands and require you to escalate the issue to someone with more experience dealing with such problems. The sooner you realize you cannot handle the problem on your own and escalate it, the better, because you can bring in the experts in record time.
6. Come up with a plan of action
Having figured out the problem, communicate to the affected party that you are sorting it out. If necessary, walk them through the process. Each solution should have an immediate or an expected effect. Be clear in this description, so that the user can alert you if they notice something different from the baseline performance after you fix their problem.
7. Monitor the results
One of the biggest mistakes network administrators make is solving a problem and then assuming that everything else is okay. Every solution can have a domino effect on something else in the network; at times, by solving one problem you might end up creating a larger one. This is why you need to study the results to ensure that you keep the rest of the network safe.
8. Documentation
If everything works just fine, remember to document the process and the solution; a rough sketch of such a record follows this list. Earlier we looked at the importance of documentation; it will come in handy later when the same problem occurs somewhere else. In the documentation, include the possible conditions that might have caused the problem. Remember to mention the software version in use. If you managed to reproduce the problem during testing, include this in the documentation. Mention all the solutions that you tried and their effects, highlighting why you opted out of those solutions. Present the final solution that worked, and why you chose it as your best option.
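As a loose illustration of the documentation step, here is a minimal Python sketch of the kind of record you might keep. The field names and sample values are hypothetical; adapt them to whatever documentation standard your organization actually uses.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class TroubleTicket:
    """Minimal record mirroring the documentation step described above."""
    summary: str
    affected_scope: str  # e.g. one workstation, a subnet, the whole LAN
    suspected_causes: list = field(default_factory=list)
    attempted_fixes: list = field(default_factory=list)
    final_solution: str = ""
    software_version: str = ""
    reproduced_in_testing: bool = False
    date_resolved: date = field(default_factory=date.today)

ticket = TroubleTicket(
    summary="User cannot log in from workstation ws-07",
    affected_scope="single workstation",
    suspected_causes=["bad patch cable", "account lockout"],
    attempted_fixes=["tested login from ws-08 (worked)"],
    final_solution="replaced damaged patch cable",
    software_version="client agent 4.2",
    reproduced_in_testing=True,
)
print(ticket)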

Conclusion
You are embarking on a journey that will take you far and change your life. The lessons in this book will carry you a long way in your career as a networking expert. Once you are done reading, set aside some time and think about everything you have read. Each chapter offers useful information and pointers that will guide you.
One of the important things you need in networking is a practice lab, or a computer on which you can try your hand at some of the lessons you learn in this book. The world of networking keeps advancing and developing over time, and some conventions might change in a few years. With this in mind, try to make sure you have access to some practice material to help you stay abreast of technologies in networking.
If you have been in the corporate space for a long time, you will have noticed that staffing managers today focus more on practical application than on paper qualifications. You might have some really impressive certificates, but if you are unable to apply the knowledge you have learned and solve problems for the manager, they will not see the benefit of hiring you for the job.
There is so much you can learn about networks and how to manage them effectively. At the moment, network security is one of the biggest concerns that organizations grapple with, and you are expected to know how to deal with it. When hired, the decision makers in your organization believe that you have what it takes to protect and safeguard their network resources.
The beauty of computing today is that there is so much evolution taking place; things change fast, yet somehow they remain the same. With in-depth knowledge of CompTIA Network+ you learn important lessons that will help you advance and evolve with technological advances as they happen. CompTIA Network+ prepares you not just by teaching you the information you need to pass the exams, but also by showing you the hands-on approach to solving problems.

COMPTIA NETWORK+ Tips and Tricks to Learn and Study about The CompTIA Network+ Certification from A-Z

WALKER SCHMIDT

Introduction
CompTIA Network+ serves as the foundational exam designed to cover basic networking principles and guidelines. Network+ is one of the leading beginner-level certifications in the market today. By passing this exam, you will have learned about networking concepts, the skills required in this field, and the terminology used in this industry. All the network environments used in the industry are covered in this exam. This book serves as a benchmark for learning and studying the concepts you require to pass the CompTIA Network+ exam. We aim to provide up-to-date concepts related to the Network+ exam, and we will also sweep through all the exam sections and provide tips on how to pass the exam with a good score.

Chapter 1 Overview of Networking Technologies

There are many network layouts that organizations use to perform their daily activities. As a good network administrator, you need to be acquainted with all types of wired and wireless network topologies to ensure that data is transmitted correctly. In this first chapter, you will get a clear understanding of the various topologies used in organizations.
Wireless and Wired Network Topologies
If you have a general understanding of networking, you must be familiar with the word 'topology.' Topology is the layout of the physical and logical network. The physical topology is the actual layout of the connections between different computers and networks. The logical topology describes how data moves between devices, regardless of the physical layout. Network topology is a core concept when it comes to learning networking, and you need to memorize the different types of wired and wireless topologies to pass the Network+ exam. Some examples of topologies used in the current market are listed below:
Bus Topology
In a bus topology, all the computers are linked to the network with the help of a trunk or backbone. You need to understand that a hub or switch is not required to connect the computers in a bus topology; instead, T-connectors or taps are used to connect them.
Ring Topology
This can be considered a logical ring in which all the computers are connected via wire in a closed loop. This is not a physical ring network, and no hub or switch is required for this topology either. You need to understand that if a section of the cable fails, the signal of the entire network is disrupted. This architecture is not suitable for environments where you need to add or remove computers often; as described before, when a single system within the ring fails, data flow across the whole network is disrupted. In a ring topology, troubleshooting is easy, as cable faults can be readily identified.
Star Topology
Star topology differs from ring and bus topology in that a switch or hub is used to connect all the devices in the network. Each device in the network is connected to the central hub with the help of a single cable. Star topology is recommended for places where you need to make constant changes in the network without disrupting the entire framework. You will need more cable to implement this environment, and if the central device fails, the entire system will come to a halt.
Wired Mesh Topology
This is a unique network design in which each computer is connected to every other computer in the network, creating a point-to-point connection between all devices. The wired mesh topology is used to ensure the uptime of the network: if any wire in the network fails, this will not create a problem for the other computers, as each node in the system works as a relay point. The wiring in a wired mesh network is complicated, and the cabling cost is also high in this environment. Hybrid mesh is another variation of the mesh topology; in a hybrid topology, you get connections between specific computers in the network.
When you are preparing for the Network+ exam, you need to ensure that you understand the topologies mentioned above and their uses in different settings. Among all the network topologies, mesh topology is the one in which you can most easily identify issues and do swift troubleshooting.
Wireless Topologies
Wireless networks can be implemented with the help of infrastructure, ad hoc, or mesh wireless topology. We have presented the important information related to the wireless topologies, which will help you prepare for section 1 of the Network+ exam.
Ad Hoc Wireless Topology
In an ad hoc topology, all the devices communicate with each other without any access point. This is a peer-to-peer design, and it is commonly used to connect a small number of computers or devices. A temporary connection between computers in a meeting, or the connection of a printer for common file sharing, is a good example of ad hoc topology.
Infrastructure Wireless Topology
Infrastructure wireless topology is used to add a wireless network to a wired network. The communication between the wired and wireless systems is made with the help of an access point, which serves as a wired LAN base station. The access point is not mobile and needs to stay connected to the wired network. Through this access point, the wireless devices are connected to the wired system. You can add different access points to a single wired network to increase the range of the wireless network. The access point is also known as a wireless access point.
Wireless Mesh Topology
The wired mesh network is not commonly used due to the excessive use of cables. The wireless mesh network does not require any cables and is one of the most common wireless networks used by corporate offices. In a wireless mesh network, the signal originates from a single access point, and different devices are used to relay the signal. The wireless mesh network is easy to implement and is very scalable.
Wired and wireless topologies are core concepts in the networking field. Memorize these concepts and understand their usage to score good marks in the Network+ exam.
Network Types
Understanding the network types will help you get a firm grip on the networking genre, and this will ultimately prepare you for the final Network+ exam. Learn these network types and their specifications here:

LANs
LAN is the short form of Local Area Network, and it is designed for a small office building or school. The main function of a LAN is to connect workstations to share resources and files. It is much cheaper than a WAN and covers a small area.
WLANs
People do not always want to rely on a wired network and need the flexibility provided by wireless networks. A WLAN is an independent network created with the help of an Ethernet device and wireless devices. Data transmission is conducted with the help of radio frequency signals. Different hotspots can be created, which can extend the reach of the signals. It is also very easy to add encryption to a WLAN.
WANs
A wide area network, or WAN, is a network that is created by joining different individual LANs. A WAN is usually slower than a LAN, and the devices used to create this network play a vital role in the transmission speed of the data.
MANs
When a WAN is limited to a single metropolitan area, it is referred to as a MAN. There are no specific guidelines that distinguish a WAN from a MAN; a MAN is usually smaller than a WAN, and ISPs and telecom service providers mostly use it.
CANs
CAN is the short form of campus area network, and it consists of multiple local area networks. A CAN does not have to be built on a campus or in a university; many military bases and industrial complexes also use this network type to transmit data easily within a large area.
After reading this chapter, you will understand the basic concepts related to networking, which cover the exam objective titled "Networking Concepts," and you will also have sufficient understanding to clear the exam sub-objective "comparing the network topologies and their types." Networking is all about data transmission between different devices. Learn and study these concepts carefully, as they will help you get a core understanding of this subject and pass the exam.

Chapter 2 IoT Device Networking

We live in a digital world, and all the devices in the world are connected via wires or radio frequencies. Devices such as small appliances, water heaters, and thermostats are interconnected, and these devices embedded in our daily lives are called the Internet of Things. The main goal of this connection is to send and receive data at a faster pace. Many technologies are used to connect these devices. In this chapter, we will discuss these connections in detail:
Z-Wave
Z-Wave is a common communication protocol that is used to connect home cinemas, home access controls, window coverings, and most of the latest HVAC units. Z-Wave is used in both commercial and residential appliances and has become a standard hub controller and portal for Internet connectivity. You can easily connect 232 devices on a Z-Wave network. All the devices are paired with each other and can be easily recognized by the controller. A Danish company, Zen-Sys, developed this standard, and today more than 50 million Z-Wave-compliant devices have been shipped all over the world. Different frequencies are used in the Z-Wave model; the two common frequencies used in the USA are 916 and 908.4 MHz. You need to make sure to assign a unique 8-bit Node ID within the network.
Ant+
Ant+ is not as common as Z-Wave and is controlled by Garmin with the help of the Ant+ Alliance. Most devices licensed under Garmin, such as lighting systems and indoor and outdoor fitness devices, are linked with the help of Ant+. It uses the 2.4 GHz range, and ANT-enabled devices come with an application host MCU as well. Communication is made with the help of serial and bidirectional messages.
Bluetooth
Bluetooth is the industry leader when it comes to short-range connectivity. It uses the 2.4 to 2.485 GHz band; the technology is very popular in PANs and is based on the IEEE 802.15.1 standard. Classic Bluetooth divides the band into 79 channels, and it is a packet-based protocol that operates under a master-slave system: the master can communicate with up to seven slave devices. Different versions of Bluetooth include v1.0, v2.0, v3.0, and v4.0. The latest, v5.0, is aimed at communication between IoT devices.
NFC
NFC, aka Near Field Communication, is a short-range communication technology and requires the client to be close to the access point. It is most commonly used for making payments via mobile devices. You can also bump two phones together and transmit data between them. There is no strict standard when it comes to the coverage distance of NFC technology.

IR
You might have heard of infrared technology a long time ago; the most common example of infrared technology is the television remote control. The user sends the command to the receiver in the TV with the help of infrared light. There has been a lot of advancement in IR technology, and it is controlled and managed by the Infrared Data Association. The decline of IR is due to the technological advancements in Bluetooth and Wi-Fi. The latest infrared wireless devices offer great data transmission speeds and can reach up to 16 Mbps. This is a secure, low-cost cable replacement option.
RFID
NFC is a newer technology in the market, and it uses the older standards that were created for RFID. RFID helps you to connect hardware and transfer passive electronic information with the help of radio waves. You need to understand the term Proximity Reader, as this is the basic term used for all the ID and card readers in this technology.
802.11
802.11 is an umbrella term for IEEE wireless networking. There are many wireless networking types under the 802.11 term. For passing the Network+ exam, you should be acquainted with 802.11a, which provides speeds up to 54 Mbps and works in the 5 GHz band. Apart from this, you should also learn about IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and IEEE 802.11ac. In the exam, you can expect performance-based questions on the different wireless and wired networking standards.
FHSS Technology
Frequency-hopping spread spectrum utilizes narrowband signals, which change frequencies based on a predictable pattern provided beforehand. Frequency hopping refers to the transmission of data on different frequency bands: the signals hop to new channels, which also helps the speed of data transmission. As FHSS technology uses radio frequency bands, it has robust resistance to different environmental factors and signal interference. This technology is used for broadband wireless access.
DSSS Technology
The direct-sequence spread spectrum uses the entire transmission frequency spectrum. For every single bit of data, redundant data is also transferred using this technology. Sending redundant data helps secure the data and ensure its delivery: only a single copy has to complete the transmission for the data to be transferred successfully. DSSS provides better security and more reliable data delivery than FHSS technology.
OFDM
Orthogonal frequency-division multiplexing splits the signals and radio frequencies into 52 evenly spaced frequencies. This helps to reduce crosstalk interference, as the data is divided across different frequencies and then transmitted to the receivers. OFDM is used for a single data sender; the multiuser version of OFDM is known as OFDMA, which stands for orthogonal frequency-division multiple access. This is a scalable model and can accommodate the data transmission needs of many users simultaneously.
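For study purposes, the 802.11 variants mentioned above can be kept in a simple lookup table, as in the Python sketch below. The figures are the commonly cited nominal maximums from general knowledge, not values taken from this chapter, so verify them against the current exam objectives.

# Commonly cited nominal figures for the 802.11 family; verify against the
# current exam objectives before relying on them.
WIFI_STANDARDS = {
    "802.11a":  {"bands_ghz": [5],      "max_mbps": 54},
    "802.11b":  {"bands_ghz": [2.4],    "max_mbps": 11},
    "802.11g":  {"bands_ghz": [2.4],    "max_mbps": 54},
    "802.11n":  {"bands_ghz": [2.4, 5], "max_mbps": 600},
    "802.11ac": {"bands_ghz": [5],      "max_mbps": 6933},
}

for std, spec in WIFI_STANDARDS.items():
    bands = "/".join(str(b) for b in spec["bands_ghz"])
    print(f"{std}: up to {spec['max_mbps']} Mbps on {bands} GHz")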

After studying and learning the IoT connectivity concepts above, you will be able to attempt Network+ objective 1.5, and this will help you to pass the exam proficiently. Having a clear understanding of these concepts is vital, so learn them effectively. These two chapters lay a foundation for the coming chapter, which will discuss ports, protocols, networking services, and models.

Chapter 3 The OSI and TCP/IP Model

When you are studying and learning about networking, the most important concept that you should be acquainted with is the Open Systems Interconnection (OSI) reference model. This is a conceptual model that was first introduced by ISO in 1978, and it serves as the network architecture which helps data to be transmitted between computer systems. In this chapter, you will get a clear understanding of the OSI model, and we will also discuss the OSI model layers in the computer networking architecture. This chapter will cover the Network+ exam's objective number 1, i.e., Networking Concepts, and the OSI model is a core part of Network+ sub-objective 1.2.
What is the OSI Networking model?
When we talk about networking models and their basic architecture, two models stand out: the TCP/IP model and the OSI networking model. Both of these models work as benchmarks for theoretical frameworks and actual networking implementations. Let us dig deep into the OSI model and its seven layers.
The OSI model consists of seven different layers built from bottom to top. The seven layers of the OSI model are the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. Every layer in the OSI model has a separate function.
Physical Layer [Layer 1]
The physical layer of the OSI model helps to identify the network topology and the physical characteristics of the network. In the physical layer, the hardware of the network, such as the type of cables, the pinout format, and the type of controller used in the network, is identified. On top of that, the physical layer also helps us to identify the topology that is used in the network. You will also get information about the voltage and frequency of the signals and the suitable data operation range. This information helps in identifying the speed and bandwidth of the data, and you can also calculate the maximum distance over which the data can be transmitted.
Data Link Layer [Layer 2]
The data link layer is a bridge between the physical layer and the network. The data link layer is responsible for getting data to the physical layer and ensures that the data is transmitted to the network. Error detection, hardware addressing, and error correction are also handled in the data link layer. The two sublayers in the data link layer are discussed below:
Logical Link Control Layer
The LLC layer handles error and flow control in the data link layer.
Media Access Control Layer
In this layer, the MAC address is defined. The MAC address is the physical address that is embedded into the network interface card. The MAC sublayer is also responsible for the network media and its transmission. Different protocols and technologies operate in the data link layer, including PPP, PPTP, STP, and HDLC.
Network Layer [Layer 3]
The network layer is responsible for transmitting data from one network to another, which is known as routing. The network layer does not generally transfer data from one network to another itself but provides the mechanism to complete this process. Software components provide the routing mechanism in the network layer, and all this data movement is controlled through routing protocols. Route selection is also completed in the network layer: the network layer determines the best way of transmitting the data by utilizing the addresses and the routing protocols provided by the users. In networking, the routes by which the data is transmitted can be configured dynamically or statically. When you add routes manually into the network, this is called a static routing environment. Dynamic routing, on the other hand, utilizes the Routing Information Protocol (RIP); you can also use Open Shortest Path First (OSPF) in a dynamic routing environment. These protocols help communication devices in the network to communicate with each other and determine the best routing technique.
Transport Layer [Layer 4]
The transport layer is responsible for communicating between different networks and effectively transporting the data. The transport layer supports data transport in three key ways:
Error Checking: there are different protocols in the transport layer which help to identify errors in the data that is received or sent. This ensures that the data is transmitted correctly.
Segmentation: when you are transferring data, you need to break the data into small chunks so that it can be easily transmitted. This segmentation, or breaking down of the data, is completed in the transport layer.
Service Addressing: different types of data are transmitted through the network, and the transport layer ensures that the right data is passed to the upper layers of the OSI model.
In the transport layer, the protocols can be connectionless or connection-oriented. Data flow control is also an important aspect of the transport layer. Two common methods used in the transport layer for flow control are buffering and windowing.
Session Layer [Layer 5]
The data which is sent from one device to another needs to be synchronized, and the session layer is responsible for this synchronization. This is accomplished by establishing and breaking sessions. The connection is formed in the transport layer, and the same function is repeated in the session layer for the applications. Server Message Block, Network File System, and NetBIOS are protocols that operate in this layer.

Presentation Layer [Layer 6]
The presentation layer is responsible for converting the data received from the application layer into a different format. The conversion makes it easier to transmit the data to other networks; applications themselves do not work with this converted form. Some of the different data formats handled by the presentation layer are graphics files, text and data files, and sound/video files. All the GIF, TIFF, and JPEG files need to be converted into a certain format so that they can be transferred to another network. The same is the case with text and data files, where the presentation layer converts the data to the ASCII and EBCDIC formats. Apart from this, all the MP3, MIDI, and MPEG files are also converted so that the data is easily transmitted.
The presentation layer is also responsible for the encryption of the data. Data sent across the network can be read by anyone who intercepts the signals; with the help of this encryption, the data can only be decoded at the recipient end.
Application Layer [Layer 7]
The main function of the application layer is to collect the data and requests from the users and pass them along to the lower layers of the model. All the information is then sent back to the application layer, where it can be seen by the users. The application layer does not represent applications like web browsers or spreadsheets; it only defines the services that applications use to access and alter network services. All the network connectivity applications and devices can be mapped onto the OSI model. After reading and learning about the layers of the OSI model, you will be able to identify the right layer for a device and will also understand its function in the network.
Understanding how OSI differs from the TCP/IP Model
The OSI model presents the framework of how networking services work and how data is transmitted from one network to another. The TCP/IP model, on the other hand, is a four-layer model and is less complicated than the OSI model. The network interface layer in the TCP/IP model corresponds to the data link layer and physical layer in the OSI model. The four main layers of the TCP/IP model are the application layer, transport layer, internet layer, and network interface layer. Both the TCP/IP and OSI models are used in networking, and all the devices are set up according to their roles and functions within the model. Learn about these models, as you will face questions about them in the Network+ exam.
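One way to drill the layer assignments discussed in this chapter is a small lookup like the Python sketch below. The protocol groupings mirror the examples named above; the helper function is just an illustrative study aid, not part of any standard.

# Layer-to-example-protocol map drawn from the chapter above.
OSI_LAYERS = {
    1: ("Physical", ["cabling", "pinouts", "signaling"]),
    2: ("Data Link", ["PPP", "PPTP", "STP", "HDLC"]),
    3: ("Network", ["IP", "RIP", "OSPF"]),
    4: ("Transport", ["TCP", "UDP"]),
    5: ("Session", ["SMB", "NFS", "NetBIOS"]),
    6: ("Presentation", ["ASCII/EBCDIC translation", "encryption"]),
    7: ("Application", ["HTTP", "FTP", "SMTP"]),
}

def layer_of(protocol):
    """Return the OSI layer number for a protocol named in the map, if any."""
    for number, (_, protocols) in OSI_LAYERS.items():
        if protocol in protocols:
            return number
    return None

print(layer_of("TCP"))  # -> 4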

Chapter 4 Ports, Protocols, and DNS

At the inception of computing technology, no one thought about how devices would interact or about developing a mechanism for their communication. Then came a time when we needed to connect computers for sharing files and data; nowadays, the thought of completing work without networked printers is a nightmare. Connecting the devices requires more than just some cables: you also need to provide the set of rules that governs how the devices will communicate. This set of rules is known as a protocol. It would be convenient if a single protocol could facilitate the communication between all the gateways and devices, but this is not the case in networking. You need to use a number of different protocols, and every protocol has its advantages and disadvantages. Here are the details of the most common protocols used in networking. You will need to learn the salient features of these protocols if you want to work as a network administrator and pass the Network+ exam.
Connectionless protocols vs. connection-oriented protocols
There are two types of protocols that you need to be acquainted with: all the protocols designed to facilitate communication between devices can be categorized as connectionless or connection-oriented protocols. Data delivery is almost guaranteed with a connection-oriented protocol. In this kind of connection, the sending device will resend any data packets that are not received by the other device, and the communication does not end until the data is verified at both ends. This also requires a lot of bandwidth. Connectionless protocols, on the other hand, only send the data and do not verify it. There is no confirmation about the data, and if there is an error in the data, no mechanism is in place to rectify the situation. A connectionless protocol does not require a lot of overhead and bandwidth; video and audio calls over the Internet mostly use connectionless protocols. Here are some of the basic protocols that you need to learn to attempt the Network+ exam.
Internet Protocol
The basic function of the Internet Protocol is to transport data among networks. IP is defined in RFC 791, and it carries data from one network to another. It is a connectionless protocol, so you do not get a guarantee that the data will be delivered to the other network; if you need to make sure that the data is delivered, you need to add another protocol, such as TCP. RFCs are the standards published by the Internet Engineering Task Force, and each standard is associated with an RFC reference number.
Transmission Control Protocol
TCP is a connection-oriented protocol. Being connection-oriented, TCP ensures that both parties acknowledge that the data is properly transferred. TCP adds many features on top of IP, such as sequencing and flow control, and this helps in proper communication and providing reliability to IP connections. This is one of the most reliable protocols, and it helps to ensure proper data transmission.
User Datagram Protocol
UDP is a transport protocol that is defined in RFC 768, and the data is transferred in a fire-and-forget manner. When data is sent through this protocol, the protocol assumes that the data is received; it is the job of the upper layers to confirm whether the data was received or not. This is a connectionless protocol, and no connection is established between the sender and the receiver. UDP is more efficient than TCP and also uses less bandwidth.
File Transfer Protocol
File Transfer Protocol can help you in downloading and uploading data on a remote FTP server. It also enables users to view the data on the FTP server, and when you have the necessary permissions, you can also modify and delete the data as you see fit. FTP is defined in RFC 959, and the Transmission Control Protocol is used to ensure that the data is properly transmitted. FTP servers can authenticate users through different user profiles created on the server, and you also have the option to enable anonymous logins. FTP is fairly popular among people who need to download and upload data files swiftly, and FTP servers have become a necessity for organizations that need to change their database from time to time.
Secure File Transfer Protocol
FTP has been around for a long time, but the basic flaw in this protocol is that it is insecure. Simple hacking approaches can be used to penetrate an FTP server: FTP does not use any encryption when it sends data from one network to another, so a packet sniffer can be used by hackers to capture the data while it is being transmitted. The Secure File Transfer Protocol uses secure shell technology that is applied on both the server and client side, and this helps in securing the files.
Trivial File Transfer Protocol
TFTP is a file transfer mechanism like FTP, and it is defined in RFC 1350. It ensures a smooth download, but you do not have the same liberty as with FTP to scroll through the files in the system. This protocol ensures data transfer and operates on port 69.
Simple Mail Transfer Protocol
When you are preparing for the Network+ exam, you need to be acquainted with all these protocols. One such important protocol is SMTP. This is a simple protocol and requires that the destination host be readily available when transferring mail.
Hypertext Transfer Protocol
HTTP is one of the most common protocols used in web browsing and can be used to transfer text, data, images, and videos from one server to another. The request for the data is made via port 80, and most applications can easily be used on HTTP servers. The files are transferred via special languages such as HTML. Though this protocol is commonly used among web developers, a downside is that it transfers the data as plain text, which is vulnerable to hacking. A secure sockets layer is added to the protocol to ensure that the data is protected.
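To see the connectionless versus connection-oriented distinction from earlier in this chapter in action, consider this minimal sketch using Python's standard socket module. The loopback address and port 9999 are arbitrary placeholders; the point is the contrast in behavior, not a working service.

import socket

# Connectionless (UDP): just send; no handshake, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"fire and forget", ("127.0.0.1", 9999))  # succeeds even if nobody listens
udp.close()

# Connection-oriented (TCP): a handshake must complete before data flows.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(2)
try:
    tcp.connect(("127.0.0.1", 9999))  # fails unless a listener accepts the connection
    tcp.sendall(b"acknowledged delivery")
except OSError as exc:
    print(f"TCP connect failed: {exc}")
finally:
    tcp.close()

The UDP send "succeeds" whether or not anything is listening, which is exactly the fire-and-forget behavior described above, while the TCP connect refuses to proceed without a willing peer.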

Internet Message Access Protocol Version 4
The Internet Message Access Protocol version 4 provides a mechanism for collecting mail from the server, and it also helps in downloading the data from attachments. Mail is transferred from network to network with the help of SMTP, but SMTP provides no mechanism for reading those emails; they are made readable with the help of this protocol. This is a very popular mechanism, and many users access their email with the help of IMAP4 clients.
Internet Control Message Protocol
The main function of the Internet Control Message Protocol is error checking and reporting. ICMP is defined in RFC 792, and this is a tool used to provide assurance about the proper delivery of data. This protocol is also used by the ping utility: ICMP sends a series of echo requests to the remote host, and when the echo replies are received, this gives an idea of how strong the connection is between two networks. This is a useful protocol, and it usually operates in the background. When the host cannot receive data at the high speed sent by the sender, this protocol can also signal the sender to slow down the data transmission.
Network Time Protocol
Everybody knows that time synchronization is very important in today's Internet era; it helps delivery systems and email servers to work perfectly. This protocol is defined in RFC 958 and operates under the TCP/IP protocol suite. The protocol will analyze the time on different networks and make sure that the systems are in alignment. There are different methods by which the protocol identifies the time: sometimes GPS devices and radio clocks are used, and in other instances it uses the time on the BIOS clock. This ensures that the data is synchronized and that the user can keep track of the changes that are made.
Lightweight Directory Access Protocol
Lightweight Directory Access Protocol is the protocol that defines the mechanism used to access a directory system. In the Network+ exam, you will have to work with Linux/UNIX-based directories. LDAP can execute command-line queries, and different utilities can also be added to the system to improve authentication and protection of data. When SSL is applied to LDAP, it is known as LDAPS, and with this additional layer of security you can be sure that the commands executed by the protocol are secure and hackers cannot penetrate them.
Simple Network Management Protocol
The Simple Network Management Protocol uses port 161, and it helps all the communication devices to provide information to a central system. Apart from this, SNMP also helps the central system to pass configuration parameters to the connected communication devices. The central system in SNMP is known as the manager, and all the communication is passed through the central system. The SNMP agent is linked with the SNMP manager with the help of an IP address. There are different components of SNMP that you need to be acquainted with if you want to understand this protocol. Many SNMP-configured devices are used to communicate and monitor the performance of data transmission. The management system, the agents, and the host control the data flow in this protocol.
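The ping utility mentioned above is easy to drive from code. Here is a small Python sketch that simply shells out to the system ping command, which in turn uses the ICMP echo mechanism described earlier; the flag handling is a simplification, and 127.0.0.1 is just a placeholder target.

import platform
import subprocess

def ping(host, count=2):
    """Return True if the host answers the system ping's ICMP echo requests."""
    # Windows uses -n for the request count; most other systems use -c.
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, str(count), host],
        capture_output=True,
    )
    return result.returncode == 0

print(ping("127.0.0.1"))  # the loopback address should always answer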

Session Initiation Protocol
Making a long-distance call is always expensive, and it is also very costly to maintain commercial phone lines; you would have to spend a lot of money keeping phones operational for long-distance calls in a commercial setup. Because of this, people are moving to VoIP services. The voice is sent and received with the help of data packets, and this helps you to avoid the cost of making phone calls. Like other communication mediums, VoIP uses different protocols to complete the job. One of the most important protocols used to make VoIP work is the Session Initiation Protocol. With the help of SIP, you can make conference calls and audio/video calls and indulge in online gaming. SIP cannot make the connection on its own and needs TCP as the transport protocol.
Remote Desktop Protocol
When you need to make remote connections to a computer, you will need help from the Remote Desktop Protocol. RDP works on port 3389 and connects the client system with the server. Nowadays, the option of remote connection is available on different systems, and all these connections are made with the help of RDP. RDP operates even on low bandwidth and can be used to send keystrokes, bitmap images, and mouse movements.
Address Resolution Protocol
This protocol is defined in RFC 826, and its main function is to resolve IP addresses and convert them into MAC addresses. When two hosts connect, the network identifies the IP address of the other host to identify the sender. IP addresses are mostly linked with the local networks, and the ARP cache helps to find the corresponding entry for the MAC address. When the communication is made with the help of MAC addresses, it is easy to connect the devices and send and receive data between them.
Network Services
The network services are responsible for enabling networks to operate with each other. There are many network services available in the world, but you will only need to understand DNS, NTP, IPAM, and DHCP for passing the Network+ exam.
Domain Name Service
DNS is defined with the help of the TCP/IP protocol suite. DNS resolves hostnames and makes it easy for people to remember names and references to the most-used hosts. IP addresses are hard to remember, and this is why DNS is so popular among tech professionals. DNS proved to be a viable solution to the HOST name problem that people had to face in the past.
DNS Namespace
This is a hierarchically organized namespace, and different parameters are used to make sure that people can easily identify the URL and the specifications of the domain. Some top-level domain names and their intended purposes are discussed here. Com is the top-level domain name which is mostly used by commercial organizations, and gov is the top-level domain name mostly used by government organizations. Some countries have also made country-specific domain names; for example, DE is the country-specific domain name for Germany.
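Name resolution is easy to observe directly. The sketch below uses Python's standard socket module and assumes a working DNS resolver; example.com stands in for any hostname you might want to look up.

import socket

# Resolve a hostname the same way applications do when they consult DNS.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", None):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])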

When you are using the Internet, you do not need to worry about DNS. The Internet service provider will handle all the issues related to DNS, and you will not need an Internet DNS server. A DNS server that is not located within the organization is known as an external DNS server. When you have a large computer base and need to build complex structures within the organization, you will need an internal DNS server. Many companies also offer DNS services, and this is a cost-effective solution; you can take Google's service and skip the internal DNS server to save money in the long run.
Windows Internet Name Service
On a Windows network, a system called WINS can be used to resolve NetBIOS names. It is important that the NetBIOS names be resolved on the system, as this helps the computer find programs with the help of NetBIOS names instead of IP addresses. There are three common ways to perform NetBIOS name resolution, and the simplest way is to install the WINS server, which will complete the process of name resolution easily.
Dynamic Host Configuration Protocol
You can use the static addressing technique to assign the IP addresses in a file. This process involves manually assigning the addresses, and you will also need to give the host permission to always use the address. This is a very tricky method, and one small mistake can ruin the entire process. The DHCP, on the other hand, uses predefined groups to assign IP addresses. This process is defined in RFC 2131, and all the IP address ranges, which are known as scopes, are identified by running the DHCP server application. The DHCP server is capable of providing many options to users and services, depending on how you implement it. There are many advantages to using DHCP: it saves the energy of your staff, as it spares them from manual addressing; it largely removes human error; and the reconfiguration process is also very easy. The DHCP server needs to be installed on a host computer, and it will prove to be a maintenance-free service requiring only occasional oversight.
IP Address Management
IPAM helps you to plan, track, and manage all the IP addresses that are used in the network. IPAM makes use of DNS and DHCP, and this ensures that all the devices are working in a proper manner and all the changes are accounted for.
Network Time Protocol
Network Time Protocol, as discussed earlier, helps to synchronize time information among all the devices on the network. Computers work on packet-switched networks with variable latency, and keeping all the clocks synchronized is valuable to both server administrators and end users. Make sure you learn and study these network protocols and applications; they will help you to pass Network+ objective 2.0.
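To make the time-synchronization idea concrete, here is a minimal SNTP-style query in Python using only the standard library. It assumes outbound UDP access on port 123 and uses pool.ntp.org as an illustrative server; real time synchronization should be left to a proper NTP service rather than a script like this.

import socket
import struct
import time

# Minimal SNTP client: send a 48-byte request (first byte sets LI=0,
# version=3, mode=3 "client") and read the server's transmit timestamp.
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

request = b"\x1b" + 47 * b"\0"
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    sock.sendto(request, ("pool.ntp.org", 123))
    reply, _ = sock.recvfrom(48)

# The transmit timestamp's seconds field sits at byte offset 40 of the reply.
server_seconds = struct.unpack("!I", reply[40:44])[0] - NTP_EPOCH_OFFSET
print("Server time:", time.ctime(server_seconds))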

Chapter 5 Addressing and Routing

There is no doubt that TCP/IP is the most used protocol suite in the world today, and it is one of the most important topics to study for the Network+ exam as well. Given its importance, we have compiled this chapter to ensure that you understand addressing in the protocol. After that, we will also highlight the routing and switching parameters in general.
IP Addressing
If you understand networks, you will know that IP addressing is one of the most complicated things to get right; it can leave even the most seasoned administrators baffled. You will only need a fundamental knowledge of IP addressing to pass the Network+ exam, and in the later sections we will discuss the basics of IP addressing, the latest version, and their variants. When devices are connected via TCP/IP, every computer needs to be assigned a unique address. The address will define how many devices are connected to the network and on which node they are connected. All the devices must have different node addresses on the network.
IPv4 and IP Address Classes
An IPv4 address consists of four sets of eight binary bits; each set of eight bits is known as an octet. This results in a 32-bit IP address. All the IP addresses are grouped into divisions which are known as classes. Each address has a fixed-length subnet mask, which separates the host portion from the network portion.
Subnetting
Once you understand how IPv4 addresses are used, you can understand the process of subnetting. With the help of the subnetting process, you will be able to make more networks than a generic subnet mask allows. Subnetting helps you to use IP address ranges more effectively. It also makes IP networking more secure, as it adds the mechanism of a subnet mask to the network. With the help of variable-length subnet masking, you have the option of using different subnet masks in the same network.
How IPv4 Private Networks Differ from Public Networks
One of the main differences between private networks and public networks is that a private network is a tightly controlled environment, and the connection to the public network is limited. The nodes on a public network, on the other hand, are carefully considered and have more autonomy than those on a private network. You will understand by now that hosts communicate via unique addresses on a TCP/IP network. This address allows communication with each device on the network and separates the host from the other nodes. On a private network, you will get approximately 100 nodes on logical networks, and addressing is not very complicated. When you are planning to do addressing on a large-scale platform such as the Internet, complications may arise. You can ask your Internet service provider about the IP address of the device, and based on the business you are conducting, you can ask for more IP addresses from the ISP. Many ISPs provide a block of IP addresses to users, but you will not be able to keep these IP addresses if you change your Internet service provider.
Private Address Ranges
Some addresses are set aside for private networks so that the Internet is protected from conflicting networks. These addresses are commonly known as private ranges. They are special addresses, and Internet routers will ignore data packets carrying these addresses. When you are studying subnet masks, you can expect questions about private ranges and how they differ from public ranges.
Default Gateways
Default gateways help a device reach nodes on other networks for which it is not specifically configured. Most companies use default gateways rather than specifically configured routes. Not all paths are defined in the devices, and the default gateway is the usual way for devices to communicate with other devices on the Internet. In the Network+ exam, you will be asked about the function of default gateways. When someone generates a request to send data to another network, the system determines whether the destination device is located on the same network or a remote network. If the device is on another network, the sender makes use of the routing table and looks for an entry for that network. When the device does not find an entry, it sends the data via the default gateway. The default gateway is simply a path for sending data out of the network; a router can act as the default gateway in some instances.
IPv4 Address Types and Virtual IP
A virtual IP is an IP that is assigned to multiple applications, mostly for increased-availability purposes. When a lot of data packets are received, many will be routed to a virtual IP, which will identify them and send the data to the respective destinations. IPv4 addresses are of three basic types: multicast addresses, broadcast addresses, and unicast addresses.
IPv6 Addressing
IPv4 addresses were formulated more than 30 years ago, and since then they have served their purpose well. The creators of IPv4 did not know what the future would hold, so they did not anticipate the need for a humongous number of IP addresses. An IPv4 address is written as 8-bit groups converted to decimal, but an IPv6 address, on the other hand, is written in 16-bit groups separated by colons. IPv6 addresses can be divided into site-local addresses, multicast addresses, anycast addresses, unicast addresses, global unicast addresses, and link-local addresses.
Assigning IP Addresses
After studying IPv4 and IPv6 addresses, you have come to know that TCP/IP hosts have unique addresses, and in this section we will discuss how you assign unique IP addresses to devices. This section covers sub-objectives 1.3 and 1.4 of the Network+ exam.
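Python's standard ipaddress module is a convenient way to experiment with the octet, subnet mask, and default-gateway ideas covered above. The interface address below is an arbitrary example chosen for illustration, not a recommendation.

import ipaddress

# A hypothetical workstation configuration: host 192.168.10.25 on a /26 subnet.
interface = ipaddress.ip_interface("192.168.10.25/26")
network = interface.network

print(network)                    # 192.168.10.0/26
print(network.num_addresses - 2)  # usable host addresses: 62
print(network.broadcast_address)  # 192.168.10.63

# The default-gateway decision: on-link destinations are reached directly,
# anything else is handed to the gateway.
for destination in ("192.168.10.40", "8.8.8.8"):
    on_link = ipaddress.ip_address(destination) in network
    print(destination, "-> direct delivery" if on_link else "-> send to default gateway")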

Assigning IP Addresses
Having studied IPv4 and IPv6 addressing, you now know that every TCP/IP host needs a unique address; in this section we discuss how you assign unique IP addresses to devices. This section covers sub-objectives 1.3 and 1.4 of the Network+ exam.

Static Addressing
Static addressing is the manual assignment of IP addresses to systems. The approach has been around for many years, but it has drawbacks. Statically addressing a single computer is easy, but assigning addresses to a large number of systems quickly becomes a problem. If the addresses on a network are configured incorrectly, devices cannot reach other networks, and whenever the organization's addressing scheme changes, staff must change all the addresses again. With thousands of computers in an organization, reconfiguring every machine takes a great deal of time.

Dynamic Addressing
Dynamic addressing is the automatic assignment of IP addresses. On almost all modern devices this is handled by DHCP, which is part of the TCP/IP suite. It removes the burden from staff, and the company saves a great deal of time with this method. One of the main responsibilities of a DHCP server is to hand out IP addresses to client systems. The server can be configured in many ways; beyond the address itself, it can also supply the subnet mask, default gateway, and DNS information.

Addressing, routing, and switching are the core concepts to learn when preparing for the CompTIA Network+ exam. Beyond these, you also need to learn about the Bootstrap Protocol (BOOTP), automatic private IP addressing, identifying MAC addresses, managing the TCP/IP configuration, routing tables, and the switching methods, such as packet switching and circuit switching. We will discuss the important topics so that you have a clear idea of what to expect when they come up.

Routing Tables
A routing table is the reference consulted whenever data is to be sent; it identifies the best available path for the transmission. Managing the routing table properly is essential to moving data efficiently. To view the routing table on a client system, use the command "route print." The basic information in a routing table includes the network destination, the netmask, the gateway (which can be a router or another system), the interface, and the metric of the route. This is the basic mechanism by which data is forwarded to other networks. The routing table must be kept complete and up to date. A router builds its routing table through dynamic routing or static routing; with static routing, the table is adjusted manually, so whenever the network layout or topology changes, the table must be adjusted again. (A toy lookup example follows.)
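The sketch below imitates what a routing table lookup does when it picks the best route for a destination, using a longest-prefix match. The routes and gateway addresses are invented for illustration; real tables also carry interfaces and metrics.

import ipaddress

routing_table = [
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1"),      # broad route
    (ipaddress.ip_network("10.1.2.0/24"), "10.1.2.1"),      # more specific
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.254"),   # default gateway
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    # Keep only routes containing the destination, then pick the longest prefix.
    matches = [(net, gw) for net, gw in routing_table if dest in net]
    best_net, best_gw = max(matches, key=lambda route: route[0].prefixlen)
    return best_gw

print(next_hop("10.1.2.99"))   # 10.1.2.1 (the /24 beats the /16)
print(next_hop("8.8.8.8"))     # 192.168.1.254 (falls through to the default)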

Switching Methods
For systems to communicate on a network, the data needs a path, or multiple paths, on which to travel. To allow entities to communicate, these paths move the information from one location to another and back. This is the function of switching, which provides communication pathways between two endpoints and manages how data flows between them. The following are two of the more common switching methods used today:

Packet Switching
In packet switching, messages are broken into small pieces called packets. Every packet is given a source, destination, and intermediate node address. Packets are required to carry this information because they do not always use the same path or route to reach their intended destination. Referred to as independent routing, this is one of the advantages of packet switching: it enables better use of available bandwidth by letting packets travel different routes to avoid high-traffic areas, and it lets packets take an alternative route if a particular route is unavailable for any reason. In a packet-switching system, when packets are sent onto the network, the sending device is responsible for choosing the best path for each packet. The path may change in transit, and the receiving device can receive the packets out of order. When this happens, the receiver waits until all the data packets have arrived and then reassembles them according to their embedded sequence numbers. (The sketch below imitates this behavior.) Two kinds of packet-switching methods are used on networks:

Virtual-circuit packet switching: A logical connection is established between the source and destination device. This logical connection is set up when the sending device initiates a conversation with the receiving device. The logical communication path between the two devices can remain active for as long as the two devices are available, or it can be used to send packets once; after the sending process has finished, the line can be closed.

Datagram packet switching: Unlike virtual-circuit packet switching, datagram packet switching does not establish a logical connection between the sending and receiving devices. The packets are sent independently, meaning they can take different paths through the network to reach their intended destination. To do this, each packet must be individually addressed with its source and destination. This method ensures that packets take the most direct possible routes to their destination and avoid high-traffic areas. Datagram packet switching is the method used primarily on the Internet.
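Here is a toy sketch of datagram-style packet switching: the sender breaks a message into numbered packets, the packets may arrive out of order, and the receiver reassembles them by sequence number. The packet size and message are arbitrary example values.

import random

def to_packets(message: bytes, size: int = 8):
    # Each packet carries a sequence number alongside its slice of data.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # The receiver waits for all packets, then orders them by sequence number.
    return b"".join(data for _, data in sorted(packets))

packets = to_packets(b"packets may take different routes to the destination")
random.shuffle(packets)  # simulate independent routing / out-of-order arrival
print(reassemble(packets).decode())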

Circuit Switching
In contrast to packet switching, circuit switching requires a dedicated physical connection between the sending and receiving devices. The most commonly used analogy is a telephone conversation, in which the parties involved have a dedicated link between them for the duration of the call; when either party disconnects, the circuit is broken and the data path is lost. This is an accurate portrayal of how circuit switching works for network and data transmissions: the sending system establishes a physical connection, the data is transmitted between the two ends, and when the transmission is complete, the channel is closed. Circuit switching has some practical advantages that make it well suited to certain applications, such as the public switched telephone network (PSTN) and the Integrated Services Digital Network (ISDN). The primary advantage is that, once a connection is established, a consistent and reliable link exists between the sending and receiving devices, which allows transmission at a guaranteed rate. Like all technologies, circuit switching has its drawbacks. As you might imagine, a dedicated communication line can be inefficient: after the physical connection is established, it is unavailable to any other session until the transmission completes. Returning to the telephone analogy, this would be like a caller trying to reach another caller and getting a busy signal. Circuit switching can therefore be plagued by long connection delays.

Chapter 6 Network Devices

All network devices exist to provide connectivity; they let computers join different networks and communicate easily. You need a clear understanding of how these devices work and what functions they perform. A good network administrator is well versed in the latest devices, and learning their basic functions is essential to passing the Network+ exam. This chapter covers objectives 2.2 and 2.3.

Common Networking Devices
This section is effectively a catalog of network devices: the devices in common use today are all listed here.

Firewall
A firewall is a networking device that controls access to the company's network. It can be either software- or hardware-based. This controlled access protects your systems from outside threats, and in this way the company's data and resources are protected. The main function of a firewall is to shield the local network from the public network, but with some configuration it can also separate segments of the same network. Firewalls are usually implemented in software, which may be tied to dedicated hardware devices. A firewall is generally used to allow or block packets entering or leaving the network, and most routers and access points have a firewall built in.

Router
A router is the networking device used to build a larger network by connecting two networks together. You can use a dedicated router, or a computer can act as a routing device when connecting to the Internet. A router, as the name suggests, routes data from one network to another. When the router receives a packet, it identifies the destination address, checks its routing table, and sends the data along the appropriate route. The router either delivers the data toward the destination computer directly or forwards it to another router, where the routing process continues.

Switch
Switches, like hubs, are the entry or connectivity points of an Ethernet network, and you connect devices to them with twisted-pair cabling. The main difference between switches and hubs is the service they provide. A hub forwards incoming data to every port on the device, but a switch sends the data only to the intended device. It does this by reading the MAC address of the receiving device; when the MAC address matches, the data is forwarded. This is why having the correct MAC addresses associated with devices is so important. (The toy model after this section shows the idea.)
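The following toy model, a sketch under invented values, captures the switch behavior just described: learn which port a source MAC address lives on, then forward only to the known port instead of flooding every port the way a hub would.

class ToySwitch:
    def __init__(self, num_ports: int):
        self.ports = range(num_ports)
        self.mac_table = {}          # MAC address -> port number

    def receive(self, in_port: int, src_mac: str, dst_mac: str):
        self.mac_table[src_mac] = in_port      # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # forward to exactly one port
        # Unknown destination: flood like a hub (every port but the source).
        return [p for p in self.ports if p != in_port]

sw = ToySwitch(4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # bb:bb unknown -> flood [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))  # aa:aa learned  -> [0]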

Hub
Hubs sit at the bottom of the networking food chain. They are used in networks that rely on twisted-pair cabling to connect devices, and hubs can be joined together to create larger networks. Hubs are simple devices that forward incoming data packets to all devices connected to the hub, regardless of whether a given device is the packet's destination. This makes them inefficient and can create a performance bottleneck on busy networks. In its most basic form, a hub does nothing except provide a pathway for the electrical signal to travel along; such a device is known as a passive hub. An active hub, by contrast, regenerates the data before forwarding it; even so, a hub performs no processing of the data it forwards, nor does it take part in any error checking. Hubs come in a variety of shapes and sizes. Small hubs with five or eight connection ports are commonly called workgroup hubs; others can accommodate a larger number of devices (typically up to 32) and are called high-density devices.

Bridge
As the name implies, a bridge connects two networks. Bridging is done at the first two layers of the OSI model and differs from routing in its simplicity. With routing, a packet is sent to where it is supposed to go, while with bridging, it is sent away from this network. In other words, if a packet does not belong on this network, it is sent across the bridge on the assumption that it belongs there rather than here. If at least one segment of the bridged network is wireless, the device is known as a wireless bridge.

Modems
A modem (short for modulator/demodulator) is a device that converts the digital signals generated by a computer into analog signals that can travel over conventional telephone lines. The modem at the receiving end converts the signal back into a format the computer can use. Modems can be used as a way to connect to an ISP or as a mechanism for dialing up a LAN. They can be internal (add-in expansion cards or built into the motherboard), external devices that connect to a system's serial or USB port, or proprietary devices designed for use with other equipment, such as portables and handhelds.

Wireless Access Point
The term access point can apply to either a wired or a wireless connection, but in practice it is almost always associated with a wireless-enabling device. Wireless access points (APs) are transmitter/receiver (transceiver) devices used to create a wireless LAN (WLAN). An AP is typically a separate network device with a built-in antenna, transmitter, and adapter. APs use the wireless infrastructure network mode to provide a connection point between WLANs and a wired Ethernet LAN. APs usually also have several ports, giving you a way to expand the network to support additional clients. Depending on the size of the network, one or more APs may be required; additional APs are used to grant access to more wireless clients and to expand the range of the wireless network.

Each AP is limited by its transmission range: the distance a client can be from the AP and still get a usable signal. The actual distance depends on the wireless standard in use and the obstructions and environmental conditions between the client and the AP. Saying that an AP is used to extend a wired LAN to wireless clients does not give the complete picture. A wireless AP today can provide a number of services beyond just being an access point. Many APs provide multiple ports that can be used to grow the network's size easily; systems can be added to and removed from the network with no effect on the other systems. Many APs also provide firewall capabilities and Dynamic Host Configuration Protocol (DHCP) service: when clients connect, the AP gives them a private IP address and then keeps Internet traffic from reaching those systems directly. In effect, the AP is a switch, DHCP server, router, and firewall. APs come in all shapes and sizes. Many are inexpensive and designed strictly for home or small-office use, with low-powered antennas and limited expansion ports. Higher-end APs used for business purposes have high-powered antennas, enabling them to extend the distance the wireless signal can travel.

A related device is the media converter. Depending on the conversion being performed, a converter can be a small device barely bigger than the connectors themselves or a large device inside a sizable chassis. The reasons for not using the same media throughout a network, and therefore for needing a converter, include cost (migrating slowly from coax or copper to fiber), dissimilar segments (connecting the office to the factory), or the need to run a particular medium in a particular setting (using fiber to reduce EMI problems in one part of the building).

Wireless Range Extender
A wireless range extender (also called a repeater or booster) amplifies a wireless signal to make it stronger. This increases the distance at which a client system can be placed from the access point and still be on the network. The extender must be set to the same channel as the AP for the repeater to take the transmission and repeat it. This is an effective technique for extending wireless transmission distances.

Multilayer Switch
Early networking devices were each manufactured to perform a single, distinct function: hubs, switches, and routers existed side by side and did different jobs. A multilayer switch operates at both layer 2 and layer 3 of the OSI model and can perform the functions of both a switch and a router. You also need to be acquainted with the content switch. A content switch is a costly device and is not commonly seen in the networking world; it examines incoming traffic to determine where the content needs to be sent, for example directing mail traffic that uses the Simple Mail Transfer Protocol (SMTP) to the SMTP server.

Wireless Controller
A wireless controller is used for authentication in remote offices. When an access point is linked to the network, the controller authenticates the device, and only then is data sent or received. It is used in scenarios where two or more networks are connected and you need to limit data sharing.

Load Balancer
Much of the burden of a network falls on its servers. They are depended on to hold and distribute data, maintain backups, secure network communications, and more. That load is often too much for a single server to sustain, and this is where load balancing comes into play. Load balancing is a technique in which the workload is distributed among several servers. It can take networks to the next level by increasing network performance, reliability, and availability. A load balancer can be either a hardware device or software configured to balance the load.

IDS/IPS
An intrusion detection system (IDS) is a passive detection system. The IDS can detect the presence of an attack and then log that information; it can also alert an administrator to the potential threat. The administrator then analyzes the situation and takes corrective measures if needed. A variation on the IDS is the intrusion prevention system (IPS), which is an active detection system. With an IPS, the device continually scans the network looking for inappropriate activity and can shut down any potential threats. The IPS looks for known signatures of common attacks and automatically tries to prevent those attacks. An IPS is considered an active/reactive security measure because it actively monitors the network and can take steps to address a potential security threat. (A toy signature check follows.)
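Here is a highly simplified sketch of the signature matching an IDS performs: scan traffic for byte patterns of known attacks and raise an alert on a hit. The "signatures" below are placeholders, not real attack fingerprints.

SIGNATURES = {
    b"' OR '1'='1": "possible SQL injection",
    b"/etc/passwd": "possible path traversal",
}

def inspect(payload: bytes):
    alerts = [name for pattern, name in SIGNATURES.items() if pattern in payload]
    for alert in alerts:
        print("ALERT:", alert)   # an IDS logs/alerts; an IPS would also block
    return alerts

inspect(b"GET /index.html?user=' OR '1'='1 HTTP/1.1")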

Proxy Server
These servers are often part of a firewall system; they have become so integrated with firewalls that the distinction between the two can sometimes be lost. Nevertheless, proxy servers perform a unique role in the network environment, a role that is separate from that of a firewall. For the purposes of this book, a proxy server is defined as a server that sits between a client computer and the Internet and examines the web page requests the client sends. For instance, if a client computer wants to access a web page, the request is sent to the proxy server rather than directly to the Internet. The proxy server first determines whether the request is intended for the Internet or for a local web server. If the request is intended for the Internet, the proxy server sends it as if it had originated the request itself; when the Internet web server returns the information, the proxy server returns it to the client. Although a delay might be introduced by the extra step of going through the proxy server, the process is largely transparent to the client that originated the request. Because every request a client sends to the Internet is channeled through the proxy server, the proxy server can provide certain functionality over and above simply forwarding requests.

These days speed is everything, and the ability to access information from the Internet quickly is a significant concern for some organizations. Proxy servers, with their ability to cache web content, suit this need for speed. An example can be found in a classroom: if a teacher asks 30 students to access a specific uniform resource locator (URL) without a proxy server, all 30 requests would be sent out to the Internet and exposed to whatever delays or other issues might arise. The classroom scenario with a proxy server is very different. Only one of the 30 requests finds its way to the Internet; the proxy server's cache satisfies the other 29, so page retrieval can be almost instantaneous. However, this caching has a potential drawback. When you go out to the Internet, you get the latest information, but that is not always the case when information is retrieved from a cache; for some web pages it is necessary to go directly to the Internet to guarantee that the information is up to date. Some proxy servers can refresh cached pages, but they are always one step behind. A reverse proxy server is one that resides near the web servers and responds to requests; these are used for load-balancing purposes, because each proxy can cache information from multiple servers. (A bare-bones caching sketch follows.)
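The sketch below reduces the classroom caching example to its core: the first request for a URL goes out to the origin server, and later requests are answered from the local cache. fetch_from_origin is a stand-in, not a real HTTP client.

cache = {}

def fetch_from_origin(url: str) -> str:
    print("fetching", url, "from the Internet")
    return "<html>page body for " + url + "</html>"

def proxy_get(url: str) -> str:
    if url not in cache:                 # only the first request goes out
        cache[url] = fetch_from_origin(url)
    return cache[url]                    # the other 29 are served locally

for _ in range(30):
    proxy_get("http://example.com/lesson")   # origin is contacted only once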

VPN Concentrator
A VPN concentrator can be used to increase remote-access security. This device establishes a secure connection (tunnel) between the sending and receiving network devices. VPN concentrators add an extra layer of VPN security: not only can they create the tunnel, they can also authenticate users, encrypt the data, regulate data transfer, and control traffic. The concentrator sits between the VPN client and the VPN server, creates the tunnel, authenticates the users who use it, and encrypts the data traveling through it. Once the VPN concentrator is set up, it can establish a secure tunnel between the sending and receiving network devices.

AAA/RADIUS Server
Among the potential issues network administrators face when implementing remote access are usage and the load on the remote-access server. As a network's remote-access demand grows, relying on a single remote-access server may become impossible, and additional servers may be required. RADIUS can help in this situation. RADIUS works as a client/server system: the remote user dials into the remote-access server, which acts as a RADIUS client, or network access server (NAS), and connects to a RADIUS server. The RADIUS server performs authentication, authorization, and auditing (or accounting) functions and returns the information to the RADIUS client (a remote-access server running RADIUS client software); the connection is then either established or rejected based on the information received.

VoIP PBX and Gateway
When telephone technology is married to information technology, the result is called telephony. There has been a huge shift from landlines to Voice over IP (VoIP) as organizations seek to save money. One of the biggest issues with this deployment is security: with both data and VoIP on the same line, both are vulnerable in the event of an attack, so standard phone systems should be replaced with a securable PBX. A VoIP gateway, also known as a PBX gateway, can be used to convert between a legacy telephony connection and a VoIP connection using SIP (Session Initiation Protocol). This is referred to as a "digital gateway" because the voice media are converted in the process.

Content Filter
A content filter is software that controls what a user is allowed to view and is most often associated with websites. Using a content filter, a business can block access to objectionable websites for all users, some users, or even a single user. The filter can be applied as software on client machines (known as client-side filters), on a proxy server on the network (a server-side filter), at the Internet service provider (ISP), or even within the search engine itself; the last is most commonly used on home machines.

Chapter 7 WAN Technologies

When you think of networking, you should not think only of connecting a few devices in an office. There is another side to the story, the wide area network (WAN). This chapter will help you understand the basic technologies used in WANs and shows how to compare their features and specifications.

Integrated Services Digital Network
ISDN has long been an alternative to slower modem WAN connections, though at a higher cost. ISDN enables the transmission of voice and data over the same physical connection, and ISDN connections are considerably faster than regular modem connections. To access ISDN, a special phone line is required; this line is usually paid for through a monthly subscription, and you can expect those monthly costs to be significantly higher than for conventional dial-up modem connections. To establish an ISDN connection, you dial the number associated with the receiving computer, much as you do with a conventional phone call or modem dial-up connection. A conversation between the sending and receiving devices is then established, and the connection is dropped when one end disconnects or hangs up. The line pickup of ISDN is fast, enabling a connection to be established, or brought up, much more quickly than over a conventional phone line.

Fiber, SONET, and OCx Levels
Bell Communications Research addressed the challenge by creating the Synchronous Optical Network (SONET), a fiber-optic WAN technology that carries voice, data, and video at speeds starting at 51.84 Mbps. Bell's main goals in creating SONET were to build a standardized access method for all carriers within the newly competitive U.S. market and to unify differing standards around the world. SONET is capable of transmission speeds from 51.84 Mbps to 2.488 Gbps and beyond.

The passive optical network (PON) is one of the less common networks on the market. In a PON, optical splitters are used to split the fiber so that service can be delivered to multiple locations. When a passive optical network is used with wavelength-division multiplexing, the setup is called WDM-PON.

Frame Relay
You need to know the history of X.25 to understand the workings and main function of Frame Relay. X.25 was the pioneering packet-switching protocol, but it has since been replaced by Frame Relay. X.25 was originally developed by the telephone companies so that digital data could be sent over copper voice lines. Because many companies were behind the development of X.25, the protocol served many purposes and had no compatibility issues.

X.25 is one of the oldest packet-switching protocols and was used as a global standard for transferring digital data over copper lines all over the world. Its downside is a transfer rate of only 56 Kbps, which was fine at the time of its inception but, given the high transmission needs of the 21st century, makes X.25 seem slow and of limited use. Being a packet-switching protocol, X.25 chooses the best route to send and receive data, ensuring the fastest transmission speed available in a given time frame. Because of X.25's slow speed, companies began relying on Frame Relay technology instead. Frame Relay is a protocol that operates at the data link and physical layers of the OSI model. It is used to send high-speed data over public data networks; it is a streamlined descendant of X.25 that provides swift data transmission using small data packets.

Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) was introduced in the early 1990s and heralded as a breakthrough in networking technology because it was an end-to-end solution, usable all the way from a desktop to a remote system. Although it was promoted as both a LAN and a WAN solution, ATM did not live up to its promise, owing to the associated implementation costs and a lack of standards. The introduction of Gigabit Ethernet, which offered great transmission speeds and compatibility with existing network infrastructure, further dampened enthusiasm for the ATM bandwagon. ATM has, however, found a niche with some ISPs and is also commonly used as a network backbone. It is useful in that it combines the benefits of both packet switching and circuit switching. ATM is a packet-switching technology that provides transfer speeds ranging from 1.544 Mbps to 622 Mbps, and it is well suited to a variety of data types, such as voice, data, and video. By using fixed-length packets, or cells, that are 53 bytes long, ATM can operate more efficiently than variable-length packet-switching technologies such as Frame Relay. Having a fixed-length packet allows ATM to concern itself only with the header information of each packet; it does not need to read all of a packet to determine its beginning and end. ATM's fixed cell length also makes it easily adaptable to other technologies as they develop. Each cell has 48 bytes available for data, with 5 bytes reserved for the ATM header (the arithmetic below shows what that overhead amounts to). ATM is a circuit-based network technology because it uses a virtual circuit to connect two networked devices; like Frame Relay, it uses the PVCs and SVCs discussed in the preceding section. ATM is compatible with the most widely used and implemented networking media types available today, including single-mode and multimode fiber, coaxial cable, unshielded twisted pair, and shielded twisted pair. Although ATM can be used over various media, the limitations of some media types make them impractical choices for deployment in an ATM network. ATM can also operate over other media, including FDDI, T1, T3, SONET, OC-3, and Fibre Channel.
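Here is the quick arithmetic behind ATM's fixed 53-byte cells (48 bytes of payload plus a 5-byte header), as described above; the 10,000-byte payload is an arbitrary example.

import math

CELL_PAYLOAD = 48   # data bytes per ATM cell
CELL_TOTAL = 53     # payload plus the 5-byte header

payload_bytes = 10_000
cells = math.ceil(payload_bytes / CELL_PAYLOAD)
overhead = cells * CELL_TOTAL - payload_bytes

print(cells, "cells,", overhead, "bytes of header/padding overhead")
# 209 cells: 209 * 53 = 11077 bytes on the wire for 10000 bytes of data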
DSL Internet Access
Digital subscriber line (DSL) is an Internet access method that uses a standard phone line to provide high-speed Internet access. DSL is most commonly associated with high-speed Internet access; because it is relatively inexpensive, it is often found in homes and small businesses. With DSL, different frequencies are used for the digital and analog signals, which means you can talk on the phone while you upload data.

For DSL services, two types of systems exist: asymmetric digital subscriber line (ADSL) and high-bit-rate digital subscriber line (HDSL). ADSL provides a high data rate in only one direction: it enables fast download speeds but significantly slower upload speeds. ADSL is designed to work with existing analog telephone (POTS) service. With its fast download speeds, ADSL is well suited to home Internet access, where uploading large amounts of data is not an everyday task. In contrast to ADSL, HDSL provides a bidirectional high-data-rate service that can accommodate applications, such as videoconferencing, that require high data rates in both directions. A variant of HDSL is very-high-bit-rate digital subscriber line (VHDSL), which provides an HDSL-style service at very high data transfer rates.

Cable Broadband
Cable broadband Internet access is an always-on Internet access method available in areas that have digital cable television. Cable Internet access is attractive to many small businesses and home-office users because it is both affordable and reliable. Most cable providers do not restrict how much use is made of the access, though they do control the speed. Connectivity is achieved using a device called a cable modem, which has a coaxial connection for attaching to the provider's outlet and an unshielded twisted pair (UTP) connection for attaching directly to a system or to a hub, switch, or router. Cable providers often supply the cable modem with a monthly rental agreement, and many offer free or low-cost installation of cable Internet service, which includes installing a network card in a PC. Most cable modems can support a higher-speed Ethernet connection for the home LAN. The actual speed of the connection can vary somewhat, depending on the utilization of the shared cable line in your area. One of the biggest drawbacks of cable access (cited by DSL providers, at any rate) is that you share the available bandwidth with everyone else in your cable area. As a result, during peak times the performance of a cable link may be poorer than in low-use periods. In neighborhoods busy in the evening, at the weekend, and particularly right after school, this can cause problems. In general, though, performance with cable systems is good, and in low-use periods it can fairly be described as fast.

Dial-up
Dial-up is one of the basic means by which a person can connect to the Internet. When you do not have access to broadband Internet, this service can get you online. Although the speed is slow, you only require a telephone line and a modem to use the service. This method is also called the plain old telephone system (POTS) method of accessing the Internet, because it uses the same line used to run the telephone in the home or office. Most people in the world now rely on 4G/LTE or broadband services to connect to the Internet, but some people still use this service to connect their devices to other networks, i.e., the Internet.

Connecting via dial-up is a simple process. You need only two basic things: dial-up access, which is provided by an Internet service provider, and a modem. Modems are used to convert the computer's digital signals into analog signals so that they can travel over the phone line. The client system may have an external or an internal modem. Internal modems are installed on a COM port and must be configured so that they do not conflict with other devices in the computer; external modems, on the other hand, are easy to install, and issues are easier to diagnose on them. The second thing you require for dial-up Internet to work is an ISP account. You can contact your Internet service provider and ask for a dial-up account, and they will share the plans and packages they offer in your area; most ISPs provide small web spaces and email accounts to users as well. The maximum speed you will get with dial-up Internet is 56 Kbps. Even though it is an old service, as a network administrator you need to be well acquainted with the dial-up ISP packages in your area. There are some free dial-up services, and with thorough research you can find one that meets your needs and requirements. Be prepared, though, to face a number of issues when connecting to the Internet through a dial-up service.

The Public Switched Telephone Network
You can think of the public switched telephone network (PSTN) as the collection of the world's telephone lines. To connect two points, you need a telephone, networking equipment, and the cabling between them. Thanks to advances in technology, digital PSTNs are now found everywhere, although some analog lines still connect homes to the phone exchanges. Modems are installed in the home so that digital signals can be converted to analog streams and sent effectively. Where Internet connections are limited, companies use the PSTN and ISDN to send and receive data. You will get speeds of only 56 Kbps and 128 Kbps on PSTN and ISDN connections respectively, which are fairly low rates compared to broadband services. Using the PSTN is preferred by companies that need to send small amounts of data and do not want to rely on other remote-access methods or the Internet.

Satellite Internet Access
DSL and cable broadband are no longer luxuries, but in places where the cables have not been laid, these services simply do not exist. Where cable broadband is unavailable, Internet via satellite is a primary option. Satellite Internet is much faster than a dial-up connection and offers an always-on connection in places broadband cannot reach. You will, however, have to contend with high cost and high latency on a satellite connection. Portability is one of satellite Internet's best features, but data rates vary and the cost is high. It is the clear choice for people who travel for business to far-off places where broadband is not accessible. Many companies offer satellite Internet services, and most of them target the business sector; a simple search on the Internet will also reveal companies that serve the private market. You may be asked about satellite Internet specifications and the types of satellite service in the Network+ exam.
There are two main types of satellite Internet service on the market: two-way systems and one-way systems. In a two-way system, the data paths support both upstream and downstream traffic over the satellite link.

You will need a satellite card and a satellite dish for this bidirectional communication to occur. With a one-way system, on the other hand, the outgoing data is sent over a phone line, and the returning data arrives via the satellite link; you still need a satellite dish to be installed and a satellite card for communication to begin. Home satellite systems use a modem alongside the satellite link, and you will usually get more download speed than upload speed: the modem carries the uplink traffic, and the satellite link is responsible for downloads. Many factors can affect the speed of satellite Internet. Propagation time in particular is high on satellite links, and you need to take this into account when using satellite Internet for business operations.

Wireless Internet Access
Wireless Internet access has become very common in the 21st century, and you can now take your laptop anywhere without worrying about Internet connectivity. Even coffee shops and restaurants offer free wireless Internet access to their customers, and communicating with anyone on the Internet has become direct and simple. Wireless Internet access is provided through access points known as hotspots, which are usually installed by Internet service providers. Hotspots can provide Internet access for devices such as laptops, cell phones, and other handhelds, and you can find them in locations such as airports, restaurants, and coffee shops. Clients may be required to install software on their devices to handle security and billing; some hotspots can be joined with only the SSID. Hotspots have also become an effective marketing tool, and many companies provide free hotspots in order to run ads on client devices.

To work properly, a network must have termination points. These endpoints stop the signal and keep it from staying live beyond the time it is needed. For the exam, CompTIA expects you to be comfortable with a variety of termination-related topics, all of which are discussed in this book. By understanding these WAN and Internet connectivity options, you will know why networking is important and how to adjust and troubleshoot connections to ensure a constant flow of information.

Chapter 8 Wireless Solutions

Understanding wireless networking solutions is very important when you are preparing to pass the CompTIA Network+ exam. Networks come in different shapes and sizes; some are made up of both wired and wireless networking devices. Wireless home Internet is an example of wireless networking and has been used by the general public for years. With wireless networking, people can interact with other networks using radio waves instead of wires; you need only stay within range of a wireless access point to reach the Internet and other networking services. In this chapter, you will come to understand wireless networks and how they differ from one another. We will also briefly discuss their roles, all of which will help you pass the CompTIA Network+ exam.

Wireless Frequencies and Channels
Radio frequencies play an important role in wireless communication. Bands of radio frequencies are known as channels, and the wireless standards use these channels to send and receive data packets. You need to be aware of the radio frequencies and of overlapping channels when troubleshooting a wireless network. Access points are often packed closely together, and even two separate access points on the same floor can overlap each other's signals.

Cellular Access
We cannot imagine our world without cell phones; possibly one-third of the world's communication takes place over them. Code-division multiple access (CDMA) and the Global System for Mobile Communications (GSM) were applied to provide cell phone coverage, and with continued research, multiple air interfaces were designed and brought to market. Individual technologies for cellular communication include EDGE, 3G, and LTE/4G.

Speed, Bandwidth, and Distance
When discussing wireless transmission, you must differentiate between the data rate and the throughput. The two terms are easily confused because they are closely related, but they are entirely different things. Every wireless networking device and technology operates at its own speed: 802.11n has a theoretical rate of 600 Mbps, and 802.11ac has a theoretical rate of 1300 Mbps. Many things can reduce the actual speed and hinder data transmission. The speed of a wireless device also depends on the number of users on the network: with more users there are more collisions, and this reduces the effective transmission rate. Wireless signals are also weakened when they pass through walls and doors. In practical terms, wireless throughput is normally one-third to one-half of the theoretical maximum (see the quick calculation below). Channel bonding is the technique of using two channels at the same time; with channel bonding, you can increase the speed of data transmission.
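Here is a back-of-the-envelope calculation for the data-rate discussion above: real throughput commonly lands around one-third to one-half of the quoted maximum. The rates are the theoretical figures mentioned in the text.

rates_mbps = {"802.11n": 600, "802.11ac": 1300}

for standard, theoretical in rates_mbps.items():
    low, high = theoretical / 3, theoretical / 2
    print(f"{standard}: ~{low:.0f}-{high:.0f} Mbps of usable throughput "
          f"(theoretical {theoretical} Mbps)")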

Antenna Ratings and Coverage
When an antenna is not providing enough coverage and the data speed is lackluster, you might think of changing the antenna, but how does a network administrator know which antenna will work best? A general-purpose antenna emits its signal roughly in the shape of a sphere. Antenna gain is measured in dBi, relative to a theoretical isotropic antenna, and the scale is logarithmic: an antenna rated at 15 dBi is roughly 32 times stronger than an isotropic antenna, not 15 times. Work out the general coverage you need and choose an antenna with the corresponding output range; remember that a higher gain value means the antenna sends and receives signals more strongly. Antennas that cover the full 360 degrees are called omnidirectional antennas and are well suited to homes and office spaces. Unidirectional (directional) antennas, on the other hand, send signals in only one particular direction; these are used at point-to-point connections and can help you send and receive signals from one office building to another.

Communication Between Wireless Devices
To understand how wireless network devices work, you need to learn how communication between them occurs. In a simple wireless network design, you will see two key parts: the access point, which acts as a bridge, and the wireless client, which can be referred to as the station. When the access point and the client start talking to each other, transmission between the devices begins. Communication starts when the wireless adapter is turned on: the device scans the wireless frequencies looking for an access point to attach to. Where several access points are available, the user can choose which one to connect to, or you can configure the SSID so that the device associates automatically. Once authentication is done, the client moves to the frequencies of the wireless router/access point, and data transmission can start. If the signal strength of the access point becomes weak, or there is too much interference, the client starts looking for a new wireless access point to connect to; this process is known as re-association. On the access point, authentication can be left open, or a keyed security code can be used to authenticate clients; all the security requirements must be satisfied before transmission can occur. Several settings must match on the client and access point before communication can begin, some of which are listed below:

Wireless Channel: As you already know, radio-frequency channels are essential for communication between wireless devices. Ranges of frequencies, known as channels, are used to communicate wirelessly, and different routers and technologies operate at different frequencies; for example, 802.11b operates from 2.4 GHz to 2.497 GHz.

Security Features: IEEE 802.11 provides security through two mechanisms: authentication and encryption. Authentication is used to verify the client side; as for the encryption settings, they must have the same values on the AP and the client for communication to be established.

Service Set Identifier: You require an SSID whether your wireless network is using ad hoc mode or infrastructure mode. The SSID is a unique key used by the client to connect to the base station.
The SSID can be set on the client system, and the access point will communicate only with clients presenting that SSID. The SSID can be thought of as a simple password required to connect to a wireless access point.

Understanding and learning these points will help you get a grasp on the problems encountered during network troubleshooting.

Troubleshooting Wireless Networks
There are many causes behind poor data transmission among wireless devices. In previous chapters we discussed problems such as latency and jitter, so you should have a clear understanding of how these affect communication and how to avoid them. In the Network+ exam, you will be asked to identify issues with a network router or access point, and studying the common problems of routers and wireless connections will help you answer those questions. Some of the issues networking professionals face with wireless communication are:

Wireless Enabled: On many computers it is very easy to turn wireless communication on and off; a single press of a button can disable the wireless connection. This is a very simple problem to identify. On some models a small light indicates whether wireless is on or off; look for that light on the system, or check the utility bar on the desktop, to see whether the wireless connection is enabled.

Signal Loss: Attenuation, or signal loss, can be caused by obstructions such as walls or doors between the access point and the client device. Examine the signal-to-noise ratio, which will tell you whether background noise is interfering with the signal. Signal loss can also occur when the bandwidth or the device is saturated.

Untested Updates: Make sure you do not apply untested updates to the network. Access point updates should always be tested in a non-production environment before being applied to live machines.

Auto Transfer Rate: Networking devices are programmed to pick the fastest and strongest wireless connection. When you face connectivity issues, lower the transfer rate on the devices to ensure a stable connection with the access point.

Wrong Wireless Standard: Make sure the wireless devices are set to a common standard that supports the rates you are looking for; mismatched wireless standards can prevent connection.

Access Point Placement: When signal strength is low, try changing the position of the access point/router; moving it even a few feet can produce a stronger signal. Signals bounce off reflective surfaces, so you can position the device so that reflections work in your favor and a higher signal level is achieved.

Antenna: The antenna that comes with a wireless device is not always powerful enough to support high-speed wireless communication. If you want to increase signal strength, you can buy an aftermarket antenna; just make sure the antenna is compatible with your wireless devices so that no incompatibility issues arise.

Conflicting Devices: If other devices are using the same frequency, they can cause interference with connectivity.

Wireless Channels: Make sure the devices are communicating on the same channel. If the connection is unreliable, try shifting to another channel.

Environmental Issues: Radio signals weaken when they must pass through metal window frames and concrete walls. Find an optimal place for the networking device so that the signals are not disrupted.

SSID: The SSID and security key must be the same on the access point and the client device. If the network ID and SSID do not match, you will not be able to communicate between the devices.

Protocol Issues: IP information will not be received if there are protocol issues between the networking device and the client device.

Factors Affecting the Wireless Signals
Wireless signals travel through the air, which means they can easily be disrupted. As a good network administrator, and as a student who needs to pass the CompTIA Network+ exam, make sure you study the factors that affect signals and learn their possible solutions; studying this section will help you with Network+ objective 5.4. Interference weakens wireless signals, so you must keep interference minimal to ensure data is transmitted in a timely fashion. A good network administrator understands wireless interference and plans the wireless network so that interference is minimized. There will always be some interference on a network, but there are some tricks to reduce it. Wireless communication takes place over radio frequencies, and there must be an unobstructed transmission path for the network to send and receive data effectively. (The short signal-to-noise calculation below shows the margin you are looking for.)
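The signal-to-noise ratio mentioned in the troubleshooting list reduces to a simple decibel subtraction: SNR (dB) = signal level (dBm) minus noise floor (dBm). The levels below are invented sample readings, not thresholds from any standard.

def snr_db(signal_dbm: float, noise_dbm: float) -> float:
    # Both inputs are logarithmic (dBm), so the ratio is a plain difference.
    return signal_dbm - noise_dbm

print(snr_db(-60, -90))   # 30 dB: a comfortable margin
print(snr_db(-75, -85))   # 10 dB: expect a weak, unreliable link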

Chapter 9 Cloud Virtualization and Computing

The term cloud virtualization is used a great deal these days, even by people who have no idea what it means. As a Network+ candidate, it is important to be versed in the meanings of cloud computing and virtualization. This section focuses on the definitions of cloud computing and virtualization at the level you need to know them for the Network+ exam. If you want to go further with the technology, consider the more recently created Cloud+ certification from CompTIA.

Private Cloud
A private cloud is defined as follows: "The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises." Under most circumstances, a private cloud is owned by an organization and used by both the provider and the consumer. It has a security-related advantage in not needing to put its data on the Internet.

Public Cloud
A public cloud is defined as follows: "The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider." Under most circumstances, a public cloud is owned by the cloud provider, and it uses a pay-as-you-go model. Good examples of public cloud services are webmail and online file sharing/collaboration.

Hybrid Cloud
A hybrid (mixed) cloud is defined as follows: "The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds)."

Connectivity Models
Most cloud providers offer a number of methods that customers can use to connect to them. Before committing to an infrastructure, it is important to check with your provider and see which methods it recommends and supports. One of the most widely used is an IPsec hardware VPN connection between your network(s) and the cloud provider's. This method gives you a managed VPN endpoint that includes automated multi-data-center redundancy and failover.

A dedicated direct connection is another, more straightforward method. You can combine dedicated network connection(s) with the hardware VPN to create a combination that offers an IPsec-encrypted private connection while also reducing network costs. Amazon Web Services (AWS) is one of the most popular cloud providers available; it permits the two connectivity methods discussed (calling the dedicated connection "AWS Direct Connect") as well as various others that are variations, or combinations, of the two.

Security Implications and Considerations
Security is one of the most significant issues to discuss with your cloud provider. Cloud computing holds great promise with regard to scalability, cost savings, rapid deployment, and resilience. As with any technology where so much is taken out of your control, however, risks are involved, and each risk should be considered carefully and thoroughly investigated to identify problems and ways to mitigate them. Normally, the obligations of the organization and the cloud provider shift depending on the service model chosen, but at the end of the day the organization is responsible for the security and privacy of the outsourced data. Software and services that are not necessary for your implementation should be removed or, at the very least, disabled. Patches and firmware updates should be kept current, and log files should be carefully monitored. You should discover the vulnerabilities in the implementation before others do, and work with your service provider(s) to close any openings. When it comes to data storage in the cloud, encryption is one of the best ways to protect it (keeping it from being of value to unauthorized people), and VPN routing and forwarding can help. Backups should be performed routinely (and encrypted and stored in a safe place), and access control should be treated as a priority. Data isolation, for instance, can help reduce some of the risk associated with multitenancy.

Basic virtual components include virtual network interface cards (vNICs), virtual switches and routers, shared memory, virtual CPUs, and storage (shared or clustered). In the following sections, we look at some of these components used to create the virtual environment.

Virtual Routers and Switches
Just as physical routers establish communication by maintaining tables about destinations and local connections, a virtual router works the same way but exists as software. Remember that a router holds information about the systems connected to it and about where to send requests if the destination is not known; these routing tables grow as connections are made through the router. Routing can happen inside the network (interior) or outside it (exterior), and the routes themselves can be configured as static or dynamic.

A virtual switch, likewise, is a software program that allows one virtual machine (VM) to communicate with another. The virtual switch enables the VM to use the hardware of the host OS (the NIC) to connect to the Internet. Switches are multiport devices that improve network efficiency. A switch typically holds a small amount of information about the systems in a network: a table of MAC addresses rather than IP addresses. Switches improve network efficiency over routers because of their virtual-circuit capability, and they also improve network security, because virtual circuits are more difficult to examine with network monitors. The switch maintains limited routing information about nodes in the internal network, and it allows connections to systems such as a hub or a router.

Virtual Firewall
A virtual firewall (VF) is either a network firewall service or an appliance running entirely within the virtualized environment. Whichever the implementation, a VF serves the same purpose as a physical one: packet filtering and monitoring. The firewall can also run in a guest-OS VM. One key consideration for a VF is not to overlook the contribution of Network Address Translation (NAT). NAT allows an organization to present a single address (or set of addresses) to the Internet for all computer connections, acting as an intermediary between the local network (which can use private IP addresses) and the Internet. NAT effectively hides your network from the world, making it much harder to determine what systems exist on the other side of the router. (A toy sketch of the translation follows.)
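Here is a toy sketch of the NAT behavior described above: private source addresses are rewritten to one public address, and a translation table maps return traffic back. The addresses and ports are invented example values, and real NAT also tracks protocols and timeouts.

PUBLIC_IP = "203.0.113.5"
nat_table = {}          # public port -> (private ip, private port)
next_port = 40000

def outbound(private_ip: str, private_port: int):
    global next_port
    public_port = next_port          # allocate a fresh public-side port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port    # what the Internet sees

def inbound(public_port: int):
    return nat_table[public_port]    # translated back for return traffic

print(outbound("192.168.1.10", 51000))   # ('203.0.113.5', 40000)
print(inbound(40000))                    # ('192.168.1.10', 51000)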

Storage Area Networks
The consumer retains ultimate responsibility for compliance. Per NIST SP 800-144, "The primary issue centers on the risks associated with moving important applications or data from within the confines of the organization's computing center to that of another organization (i.e., a public cloud), which is readily available for use by the general public. The responsibilities of both the organization and the cloud provider vary depending on the service model. Reducing cost and increasing efficiency are primary motivations for moving towards a public cloud, but relinquishing responsibility for security should not be. Ultimately, the organization is accountable for the choice of public cloud and the security and privacy of the outsourced service."

Shared storage can be implemented on storage area networks (SANs), network attached storage (NAS), and so on; the virtual machine sees only a "physical disk." With clustered storage, you can use multiple devices to increase performance. A handful of technologies exist in this realm, and what follows are those that you need to know for the Network+ exam.

iSCSI
The Small Computer System Interface (SCSI) standard has long been the language of storage. Internet Small Computer System Interface (iSCSI) extends this across Ethernet, allowing IP to be used to send SCSI commands. Logical unit numbers (LUNs) come from the SCSI world and carry over as a type of identifier for devices. Both NAS and SAN use "targets" that hold up to eight devices. Using iSCSI for a virtual environment gives clients the benefit of a file system without the difficulty of setting up Fibre Channel. Because iSCSI works both at the hypervisor level and in the guest operating system, the rules that govern the size of the partition in the OS are used instead of those of the virtual OS (which are usually more restrictive).

One of the biggest issues with networking is that data of various sizes is packed into packets and sent over the medium. Each time this is done, headers are created (more data to process), along with any padding required, adding overhead. To get around this, the concept of jumbo frames is used to allow for large Ethernet frames; by sending a lot of data at once, the number of packets is reduced, and the data sent is less processor intensive.

Fibre Channel and FCoE
Rather than using an older technology and trying to cling to legacy standards, Fibre Channel (FC) is an option that provides a higher level of performance than anything else. It uses FCP, the Fibre Channel Protocol, to do what needs to be done, and Fibre Channel over Ethernet (FCoE) can be used in high-speed (10 Gbps and higher) implementations. The big advantage of Fibre Channel is its scalability. FCoE encapsulates FC over the Ethernet portions of connectivity, making it easy to add to an existing network. As such, FCoE is an extension to FC intended to broaden the scalability and efficiency associated with Fibre Channel.

Network Attached Storage
Storage is always a big issue, and the best answer is a SAN. Unfortunately, a SAN can be expensive and difficult to implement and maintain. That is where network attached storage (NAS) comes in. NAS is simpler than SAN and uses TCP/IP. It offers file-level access, and a client sees the NAS simply as a file server.

Chapter 10 Network Operations

In this chapter, you will learn about documentation and the other tools that are used to monitor and organize networks. Network administrators have many daily tasks, and new ones frequently crop up. In this environment, tasks such as documentation sometimes fade into the background. It is important that you understand why administrators need to invest valuable time writing and reviewing documentation. Having a well-documented network offers several advantages:

Troubleshooting: When something goes wrong on the network, including the wiring, up-to-date documentation is a valuable reference to guide the troubleshooting effort. The documentation saves you money and time in isolating potential problems.

Training new administrators: In many network environments, new administrators are hired and old ones leave. In this situation, documentation is essential. New administrators don't have time to try to figure out where cabling is run, what cabling is used, potential trouble spots, and more. Up-to-date information helps new administrators quickly see the network layout.

Working with contractors and consultants: Consultants and contractors occasionally may need to visit the network to make recommendations or to add wiring or other components. In such cases, up-to-date documentation is required. If documentation is missing, it is considerably harder for these people to do their jobs, and more time and money would almost certainly be required.

Inventory management: Knowing what you have, where you have it, and what you can turn to in case of an emergency is both efficient and helpful.

Quality network documentation doesn't happen by accident; rather, it requires careful planning. When creating network documentation, you should keep in mind who you are creating the documentation for and that it is a technical tool. Documentation is used to take technical information and present it in a way that someone new to the network can understand it. When planning network documentation, you must decide what you need to document:

Wiring layouts and rack diagrams: Network wiring can be confusing. Much of it is hidden in walls and ceilings, making it hard to tell where the wiring is and what kind is used on the network. This makes it essential to keep documentation on network wiring up-to-date. Diagram what is on each rack and any unusual configurations that may be used.

IDF/MDF documentation: It isn't enough to show that there is an intermediate distribution frame (IDF) and/or a main distribution frame (MDF) in your building. You need to fully document each freestanding or wall-mounted rack and the cables running between them and the end-user devices.

Server configuration: A single network typically uses multiple servers spread over a large geographic area. Documentation must include schematic drawings of where servers are located on the network and the services each provides. This includes server function, server IP address, operating system (OS), software information, and more. You need to document all the information required to manage or administer the servers.

Network equipment: The hardware used on a network is configured in a particular way—with protocols, security settings, permissions, and more. Trying to remember it all later is a difficult task. Having up-to-date documentation makes it easier to recover from a failure.

Network configuration, performance baselines, and key applications: Documentation also includes information on all current network configurations, performance baselines taken, and key applications used on the network—for example, up-to-date information on their updates, vendors, install dates, and more.

Detailed record of network services: Network services are a key ingredient in all networks. Services such as Domain Name Service (DNS), Dynamic Host Configuration Protocol (DHCP), Remote Access Services (RAS), and more are an important part of documentation. You should describe in detail which server maintains these services, the backup servers for these services, maintenance schedules, how they are configured, and so on.

Standard operating procedures/work instructions: Finally, documentation should include information on network policies and procedures. This covers many elements, ranging from who can and cannot access the server room, to network firewalls, protocols, passwords, physical security, cloud computing use, cell phone use, and so on.

Baselines and their Documentation
Baselines play an integral part in network documentation because they let you monitor the network's overall performance. In simple terms, a baseline is a measure of performance that indicates how hard the network is working and where network resources are spent. The purpose of a baseline is to provide a basis of comparison. For example, you can compare the network's performance results taken in March to results taken in June, or from one year to the next. More commonly, you would compare baseline information taken when the network was having a problem to information recorded when the network was working with greater efficiency. Such comparisons help you determine whether there has been a problem with the network, how big that problem is, and even where the problem lies.

To be of any use, baselining is not a one-time task; rather, baselines should be taken periodically to give an accurate comparison. You should take an initial baseline after the network is set up and operational, and then again whenever major changes are made to the network. Even if no changes are made to the network, periodic baselining can prove valuable as a way to determine whether the network is still working efficiently. All network operating systems (NOSs), including Windows, Mac OS, UNIX, and Linux, have built-in support for network monitoring. Also, some third-party software packages are available for detailed network monitoring.
These system monitoring tools provided in a NOS give you the means to take performance baselines, either of the entire network or of an individual segment within the network. Because of the different functions of these two baselines, they are known as a system baseline and a component baseline.

To create a network baseline, network monitors provide a graphical display of network statistics. Network administrators can choose a variety of network measurements to track. They can use these statistics to perform routine troubleshooting tasks, such as finding a malfunctioning network card, a downed server, or a denial-of-service (DoS) attack. Gathering network statistics is a process called capturing. Administrators can capture statistics on all elements of the network. For baseline purposes, one of the most common metrics to monitor is bandwidth usage. By reviewing bandwidth statistics, administrators can see where most of the network bandwidth is used. They can then adjust the network for better bandwidth usage. If a particular application uses too much bandwidth, administrators can proactively control its bandwidth usage. Without comparison baselines, however, it is hard to tell what is normal network bandwidth usage and what is abnormal.
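As a simple illustration, the sketch below compares bandwidth samples captured during a problem against samples captured at baseline time. The numbers and the 50 percent threshold are invented for the example; real monitoring tools offer far richer comparisons.

```python
# Compare current bandwidth utilization against a previously captured baseline.
baseline_mbps = [12.0, 14.5, 13.2, 12.8, 15.1]  # captured when all was well
current_mbps  = [38.9, 41.0, 39.7, 40.2, 42.3]  # captured during the problem

baseline_avg = sum(baseline_mbps) / len(baseline_mbps)
current_avg = sum(current_mbps) / len(current_mbps)

# Flag anything well outside the baseline (the 1.5x threshold is arbitrary).
if current_avg > baseline_avg * 1.5:
    print(f"Utilization {current_avg:.1f} Mbps vs baseline "
          f"{baseline_avg:.1f} Mbps: investigate")
```

Without the baseline numbers, the current figures alone would tell you nothing about whether utilization is abnormal.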

Policies, Procedures, Configurations, and Regulations
Well-functioning networks are characterized by documented policies, procedures, configurations, and regulations. Because they are unique to each network, policies, procedures, configurations, and regulations should be clearly documented.

Policies
By definition, policies refer to an organization's documented rules about what can be done, what cannot, and why. Policies dictate who can and cannot access particular network resources, server rooms, backup media, and more. Although networks may have different policies depending on their needs, some common policies include the following:

Network use policy: Defines who can use network resources such as PCs, printers, scanners, and wireless connections. In addition, the use policy dictates what can be done with these resources after they are accessed. No outside systems are to be attached to the network without permission from the network administrator.

Internet use policy: This policy specifies the rules for Internet use at work. Typically, use should be focused on business-related tasks. Incidental personal use is permitted during specified times.

Bring your own device (BYOD) policy: This policy specifies the rules for employees' personally owned mobile devices (smartphones, laptops, tablets, and so on) that they bring into the workplace and use to connect to privileged company information and applications. Two things the policy needs to address are onboarding and offboarding. Onboarding a mobile device is the procedure used to prepare it to go on the network (checking for viruses, adding certain applications, and so on). Offboarding is the process of removing company-owned resources when the device is no longer needed on the network (often done with a wipe or factory reset). Mobile device management (MDM) and mobile application management (MAM) tools (usually third-party) are used to control and manage both employee-owned and company-owned mobile devices and applications.

Email use policy: Email must follow the same code of conduct as expected in any other form of written or face-to-face communication. All messages are company property and can be accessed by the company. Personal messages should be promptly deleted.

Personal software policy: No outside software should be installed on networked computer systems. The network administrator must approve all software installations. No software may be copied or removed from a site. Licensing restrictions must be adhered to.

Password policy: Details how often passwords must be changed and the minimum level of security for each (number of characters, use of the alphanumeric character set, and so on); a simple sketch of how such minimums might be checked follows this list.

User account policy: All users are responsible for keeping their password and account information secret. All staff are required to log off and, in some cases, lock their systems after they finish using them. Attempting to log on to the network with another user's account is considered a serious violation.

Global export controls: Various laws and regulations govern what can and cannot be exported when it comes to software and hardware destined for other countries. Employees should take precautions to ensure they are adhering to the letter of the law.

Data loss prevention: Losses caused by employees can quickly put a company in the red. It should be understood that every employee must ensure that all preventable losses are, in fact, prevented.

Incident response policies: When an incident occurs, all employees should understand that they must be vigilant for it and report it promptly to the appropriate party.

Non-Disclosure Agreements (NDAs): NDAs are the oxygen that many companies need to thrive. Employees should understand their importance to continued business operations and agree to follow them in the letter, and spirit, of the law.

Safety procedures and policies: Safety is everyone's business, and all employees should know how to do their jobs in the safest manner while also looking out for fellow employees and customers alike.

Ownership policy: The company owns all data, including users' email, voice mail, and Internet use logs, and the company reserves the right to examine these at any time. Some companies even go so far as to control how much personal data can be stored on a workstation.

This list is only a snapshot of the policies that guide the behavior of administrators and network users. Network policies should be clearly documented and available to network users. Often, these policies are reviewed with new staff members or new administrators. As they are updated, they are rereleased to network users. Policies are constantly reviewed and updated.
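As promised in the password policy item above, here is a minimal sketch of checking a password against assumed minimums (at least 10 characters drawn from at least three character classes). The specific rules are placeholders; a real policy would also define rotation intervals and password history.

```python
# Check a password against an assumed complexity policy.
import re

def meets_policy(password: str) -> bool:
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    matched = sum(1 for c in classes if re.search(c, password))
    return len(password) >= 10 and matched >= 3

print(meets_policy("Summer2024!"))  # True: long enough, four classes present
print(meets_policy("password"))     # False: too short and too simple
```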

Network procedures differ from policies in that they describe how tasks are to be performed. For instance, every network administrator has backup procedures specifying the time of day backups are done, how often they are done, and where they are stored. A network is full of procedures, for functional reasons and, perhaps more important, for security reasons. Administrators must be aware of several procedures on the job. The number and exact type of procedures depend on the network. The overall goal is to ensure consistency and to ensure that network tasks follow a framework. Without this procedural framework, different administrators might approach tasks differently, which could lead to confusion on the network. Network procedures may include the following:

Backup procedures: Backup procedures specify when backups are to be performed, how often a backup occurs, who does the backup, what data is to be backed up, and where and how it will be stored. Network administrators should carefully follow backup procedures.

Procedures for adding new users: When new users are added to a network, administrators typically need to follow certain guidelines to ensure that the users have access to what they need, but no more. This is known as the principle of least privilege.

Privileged user agreement: Administrators and authorized users who can modify secure configurations and perform tasks such as account creation, account termination, account resetting, auditing, and so on should be held to high standards.

Security procedures: Some of the more critical procedures involve security. Security procedures are numerous, but they may include specifying what the administrator must do if security breaches occur, security monitoring, security reporting, and updating the OS and applications to close potential security holes.

Network monitoring procedures: The network should always be monitored. This includes tracking such things as bandwidth usage, remote access, user logons, and more.

Software procedures/system life cycle: All software must be periodically reviewed and updated. Documented procedures dictate when, how often, why, and for whom these updates are done. When assets are disposed of, asset disposal procedures should be followed to document and log their removal properly.

Procedures for reporting violations: Users don't always follow outlined network policies. This is why documented procedures should exist to handle violations properly. This may include a verbal warning upon the first offense, followed by written reports and account lockouts thereafter.

Remote-access and network admission procedures: Many employees access the network remotely. This remote access is granted and maintained using a series of defined procedures. These procedures may dictate when remote users can access the network, how long they can access it, and what they can access. Network admission control (NAC)—also referred to as network access control—determines who can get on the network and is usually based on 802.1X rules.

Change Management Documentation
Change management procedures may include the following:

Document the reason for a change: Before making any change at all, the first question to ask is why. A change requested by one user may be based on a misunderstanding of what the technology can do, may be cost prohibitive, or may offer a benefit not worth the effort.

Change request: An official request should be logged and tracked to verify what was approved and what has been done. Within the scope of the change request should be the configuration procedures to be used, the rollback process that is in place, the potential impact identified, and a list of the individuals who need to be notified.

Approval process: Changes should not be approved on the basis of who makes the most noise, but rather of who has the most justified reasons. An official process should be in place to evaluate and approve changes before actions are undertaken. The approval can be done by a single administrator or a formal committee, depending on the size of your organization and the scope of the change being approved.

Maintenance window: After a change has been approved, the next question to address is when it is to take place. Authorized downtime should be used to make changes to production environments.

Notification of change: Those affected by a change should be notified after the change has taken place. The notification should not cover simply the change itself but should include its full impact on them and identify who they can go to with questions.

Documentation: One of the final steps is always to document what has been done. This should include documentation on network configurations, additions to the network, and physical location changes.

These represent only a few of the procedures that administrators must follow on the job. It is crucial that all of these procedures be well documented, accessible, reviewed, and updated as needed to remain effective.

Configuration Documentation
One other critical type of documentation is configuration documentation. Many technicians think they could never forget the configuration of a switch, server, or router. However, it happens regularly. Although it is often a tedious, time-consuming task, documenting the network hardware and software configurations is essential for continued network functionality.

Backups

Full Backups
The preferred method of backup is the full backup method, which copies all files and directories from the hard disk to the backup media. There are a few reasons why doing a full backup isn't always possible. First among them is probably the time involved in performing a full backup.

Depending on the amount of data to be backed up, full backups can take a very long time and can use extensive system resources. Depending on the configuration of the backup hardware, this can considerably slow down the network. In addition, some environments have more data than can fit on a single medium. This makes doing a full backup awkward, because someone must be there to change the media.

The main advantage of full backups is that a single set of media holds all the data you need to restore. If a failure occurs, that single set of media should be all that is needed to get all data and system information back. As a result, any disruption to the network is greatly reduced. Unfortunately, its strength can also be its weakness. A single set of media holding an organization's data can be a security risk. If the media were to fall into the wrong hands, all the data could be restored onto another computer. Using passwords on backups and using secure offsite and on-site storage locations can minimize the security risk.

Differential Backups
Organizations that don't have enough time to complete a full backup daily can use the differential backup. Differential backups are faster than a full backup because they back up only the data that has changed since the last full backup. This means that if you do a full backup on a Saturday and a differential backup on the following Wednesday, only the data that has changed since Saturday is backed up. Restoring the differential backup requires the last full backup and the latest differential backup.

Differential backups know which files have changed since the last full backup because they use a setting called the archive bit. The archive bit flags files that have changed or have been created and identifies them as ones that need to be backed up. Full backups don't concern themselves with the archive bit, because all files are backed up regardless of date. A full backup, however, does clear the archive bit after data has been backed up, to avoid future confusion. Differential backups look at the archive bit and use it to determine which files have changed. The differential backup does not reset the archive bit.

Incremental Backups
Some organizations have a limited amount of time they can allocate to backup procedures. Such organizations are likely to use incremental backups in their backup strategy. Incremental backups save only the files that have changed since the last full or incremental backup. Like differential backups, incremental backups use the archive bit to determine which files have changed since the last full or incremental backup. Unlike differentials, however, incremental backups clear the archive bit, so files that have not changed are not backed up.

The faster backup time of incremental backups comes at a price—the amount of time required to restore. Recovering from a failure with incremental backups requires numerous sets of media—all the incremental backup media sets plus the one for the most recent full backup. For example, if you have a full backup from Sunday and an incremental for Monday, Tuesday, and Wednesday, you need four sets of media to restore the data. Each set, in turn, is an additional step in the restore process and an additional point of failure.
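The archive-bit behavior described above can be simulated in a few lines. The file names and change history below are invented; the point is only to show which backup types clear the bit and what each type picks up.

```python
# Simulate how the archive bit drives full, differential, and incremental backups.
files = {"payroll.db": True, "notes.txt": True, "web.cfg": True}  # True = bit set

def full_backup():
    backed_up = list(files)              # everything, regardless of the bit
    for f in files:
        files[f] = False                 # full backups clear the archive bit
    return backed_up

def differential_backup():
    # Everything changed since the last FULL backup; the bit is left set.
    return [f for f, bit in files.items() if bit]

def incremental_backup():
    changed = [f for f, bit in files.items() if bit]
    for f in changed:
        files[f] = False                 # incrementals clear the bit
    return changed

print(full_backup())          # all three files; all bits now cleared
files["notes.txt"] = True     # notes.txt is modified on Monday
print(differential_backup())  # ['notes.txt']
print(differential_backup())  # ['notes.txt'] again: differentials don't reset it
print(incremental_backup())   # ['notes.txt'], and the bit is now cleared
print(incremental_backup())   # []: nothing changed since the last incremental
```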

When networks were smaller and few extended beyond the boundaries of a single location, network management was a simple undertaking. In today's complex, multisite, hybrid networks, however, the task of maintaining and monitoring network devices and servers has become a complicated but essential part of the network administrator's job. These days, the role of network administrator often extends beyond the physical boundary of the server room and reaches every node and component on the network. Whether an organization has 10 computers on a single segment or a multisite network with several thousand attached devices, the network administrator must monitor all network devices, protocols, and usage—ideally from a central location.

Given the sheer number and diversity of potential devices, software, and systems on any network, it is clear why management is such a huge consideration. Despite the ability of the network management process to improve administrator efficiency and reduce downtime, many companies choose to ignore management because of the time involved in setting up the system or because of the associated costs. If these companies understood the potential savings, they would realize that neglecting network management yields false economies.

Network management and network monitoring are essential techniques for controlling, configuring, and monitoring devices on a network. Imagine a scenario in which you are a network administrator working out of your main office in Spokane, Washington, and you have satellite offices in New York, Dallas, Vancouver, and London. Network management enables you to access systems in the remote locations or have the systems notify you when something goes amiss. Essentially, network management is about seeing beyond your current boundaries and acting on what you see. Network management isn't one single thing. Rather, it is a collection of tools, systems, and protocols that, when used together, enables you to perform tasks such as reconfiguring a network card in the next room or installing an application in the next state.

Common Reasons to Monitor Networks
The capabilities demanded of network management vary somewhat among organizations, but essentially, a few key types of information and functionality are required, such as fault detection and performance monitoring. Some of the types of information and functions that network management tools can provide include the following:

Utilization: Once upon a time, it was common for a network to have to limp along with scarce resources. Administrators would constantly need to trim logs and archive files to keep enough free space available to service print jobs. Those days are gone, and any hint of those conditions would be unacceptable today. To keep this from happening, one of the keys is to manage utilization and stay on top of issues before they arise. Five areas of utilization to monitor are as follows:

Bandwidth/throughput: There must be enough bandwidth to serve all users, and you should be alert for bandwidth changes. You need to look for top talkers (those that transmit the most) and top listeners (those that receive the most) and figure out why they are so prominent.

Storage space: Free space should be available for all users, and quotas should be implemented.

Network device CPU: Just as a local machine slows when its processor is pushed to the limit, so does the network.

Network device memory: It is difficult to have too much memory. Balance loads to make the most of the resources you have to work with.

Wireless channel utilization: Akin to bandwidth utilization is channel utilization in the wireless realm. As a rule of thumb, a wireless network begins experiencing performance problems when channel utilization reaches 50 percent of channel capacity.

Fault detection: One of the most fundamental parts of management is knowing whether anything isn't working or isn't working correctly. Management tools can detect and report on a variety of problems on the network. Given the number of potential devices that constitute a typical network, determining faults without these tools could be an impossible task. In addition, network management tools may not only identify the faulty device but also shut it down. This means that if a network card is malfunctioning, you can remotely disable it. When a network spans a large area, fault detection becomes even more important because it enables you to be alerted to network faults and to manage them, thereby reducing downtime.

Performance monitoring: Another feature of network management is the ability to monitor network performance. Performance monitoring is an essential consideration that gives you some crucial information. In particular, performance monitoring can provide network utilization statistics and user usage patterns. This type of information is essential when you plan for network capacity and growth. Monitoring performance also helps you determine whether there are any performance-related concerns, such as whether the network can adequately support the current user base.

Security monitoring: Good server administrators have a touch of paranoia built into their character. A network management system enables you to monitor who is on the network, what they are doing, and how long they have been doing it. More important, in an environment where corporate networks are increasingly exposed to outside sources, the ability to identify and react to potential security threats is a necessity. Reading log files to learn of an attack is a poor second to knowing that an attack is in progress and being able to react accordingly. Security information and event management (SIEM) products provide notifications and real-time analysis of security alerts and can help you head off problems quickly.

Link status: You should regularly monitor link status to make sure that connections are up and working. Outages should be found and identified as quickly as possible so they can be fixed or worked around. Various link status monitors exist to watch connectivity, and many can reroute traffic (per a configured script file) when a down condition occurs; a minimal sketch of such a check appears after this list.

Interface monitoring: Just as you want to monitor for a link going down, you also need to know when there are problems with an interface.

Specific issues to watch for include errors, utilization issues (unusually high, for instance), discards, packet drops, resets, and problems with speed/duplex. An interface monitoring tool is valuable for troubleshooting issues here.

Maintenance and configuration: Want to reconfigure or shut down the server located in Australia? Reconfigure a local router? Change the settings on a client system? Remote management and configuration are key parts of a network management strategy, enabling you to manage huge multisite operations centrally.

Environmental monitoring: It is important to monitor the server room, and other key equipment, for temperature and humidity conditions. Humidity control prevents the buildup of static electricity, and when the level drops much below 50 percent, electronic components become vulnerable to damage from electrostatic discharge. Environmental monitoring tools can alert you to any dangers that arise here.

Wireless monitoring: As more networks go wireless, you need to pay special attention to the issues associated with them. Wireless survey tools can be used to create heat maps showing the quantity and quality of wireless network coverage in an area. They can also let you see access points (including rogues) and security settings. These can be used to help you plan and deploy an efficient network, and they can also be used (by you or others) to find weaknesses in your existing network (they are often marketed for this purpose as wireless analyzers).
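As referenced in the link status item above, here is a bare-bones sketch of a connectivity check: attempt a TCP connection to each key device and report anything unreachable. The hosts and ports are placeholders, and real tools add scheduling, alerting, and rerouting on top of this idea.

```python
# Minimal link/service monitor: test TCP reachability of key devices.
import socket

targets = [("192.168.1.1", 443), ("192.168.1.10", 53)]  # e.g., router, DNS server

for host, port in targets:
    try:
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} up")
    except OSError:
        print(f"{host}:{port} DOWN - investigate or fail over")
```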

Chapter 11 Network Security

It does minimal good to have great network security if everything can be compromised by somebody walking into your office, picking up your server, and walking out the front door with it. Physical security of the premises is just as critical to an overall security implementation. Ideally, your systems should sit behind at least three physical barriers:

The external entrance to the building, referred to as a perimeter, which is protected by motion detection, burglar alarms, external walls, fencing, surveillance, and so on. This should be used with an access list, which should exist to specifically identify who can enter a facility and can be verified by a security guard or someone in authority. A mantrap can be used to limit access to only one or two people entering the facility at a time. A properly constructed mantrap includes bulletproof glass, high-strength doors, and locks. In high-security and military environments, an armed guard, as well as video surveillance (IP cameras and CCTVs), should be used at the mantrap. After an individual is inside the facility, additional security and authentication may be required for further entrance.

A locked door with door access controls protecting the computer center and network closets; you should also rely on such things as ID badges, proximity readers/key fobs, or keys to gain entrance. Biometrics, such as fingerprint or retinal scans, can be used for authentication.

The entrance to the computer room itself, which should be yet another locked door that is carefully monitored and protected by keypads and cipher locks. Although you try to keep as many intruders out as possible with the other two barriers, many who enter the building could be posing as someone else—heating and air technicians, representatives of the landlord, and so on. Although these pretexts can get them past the first two barriers, they should still be stopped by the locked computer room door.

Assets should have asset tracking tags attached to them that carry unique identifiers for every client device in your environment (typically just incrementing numbers corresponding to values in a database) to help you identify and manage your IT assets. In addition, tamper detection devices should be installed to protect against unauthorized case opening and component removal.

The objective of any physical barrier is to prevent access to computers and network systems. The most effective physical barrier implementations require that more than one physical barrier be crossed to gain access. This type of approach is known as a multiple barrier system.

Physical Security Measures
Physical security is a combination of common sense and strategy. The purpose of physical security is to limit access to network equipment to only the people who need it.

The extent to which physical security measures can be implemented to protect network devices and data depends largely on their location. For example, if a server is installed in a cabinet located in a general office area, the only effective physical protection is to ensure that the cabinet door is locked and that access to its keys is controlled. It might be helpful to use other antitheft devices, but that depends on the location of the cabinet. However, if your server equipment is located in a closet or dedicated room, access restrictions for the room are easier to implement and can be more effective. Again, access should be limited only to the people who need it. Depending on the size of the room, this factor may introduce a number of variables.

Servers and other key networking components are the assets to which you need to apply the greatest degree of physical security. These days, most organizations choose to locate servers in a closet or a dedicated room. Access to the server room should be tightly controlled, and all access doors must be secured by some method, whether it is a lock and key or a retinal scanning system. Each method of server room access control has certain characteristics. Whatever the method of server room access, it should follow one common principle: control. Some access control methods provide more control than others.

Additional Security Measures
Physical security measures help keep the network secure, and the following additional measures can protect you from both internal and external network breaches:

Change default credentials: The simplest way for any unauthorized person to gain access to a device is by using the default credentials. Many routers, for example, come configured with an "admin" account and a simple value for the password ("admin," "password," and so on). Anybody who owns one of those routers knows those values and could use them to access any other router of the same make if the values have not been changed. To make it more difficult for unauthorized users to access your devices, change those default usernames and passwords when you start using them.

Avoid common passwords: It is a good thing to preach password security to users, but administrators are often guilty of using too-simplistic passwords on networking devices such as routers, switches, and so on. Given the large number of devices in question, sometimes the same passwords are also used on multiple devices. Common sense tells every administrator this is wrong, but it is often done anyway. Don't be that administrator; use complex passwords and use a different password for each device, increasing the overall security of your network.

Upgrade firmware: There is a reason why every firmware update is written. Sometimes it is to improve the device or make it more compatible with other devices. Other times, it is to fix security issues or head off identified problems. Keep the firmware on your production machines current after first testing the upgrades on lab machines and verifying that you're not introducing any unwanted issues by installing them.

Apply patches and updates: Just as firmware upgrades are intended to strengthen devices or solve problems, patches and updates do the same for software (including operating systems). Test each release on a lab machine (or machines) to make sure you are not adding to network woes, and then keep your software current to harden it.

Verify file hashes: File hashing is used to verify that the contents of files are unaltered. A hash is typically created on a file before it is made available for download and then computed again after the download, so the two values can be compared to make sure the contents are the same. When downloading files—particularly upgrades, patches, and updates—check hash values and use this one test to keep from installing content that has had Trojan horses attached to it (a short sketch of this check appears after this list).

Disable unnecessary services: Every unnecessary service running on a server is like another door on a warehouse that somebody unauthorized might sneak in through. Just as an effective way to secure a warehouse is to reduce the number of doors to only those required, so too is it recommended that a server be secured by removing (disabling) services not in use.

Generate new keys: Keys are used as part of the encryption process, particularly with public key encryption (PKI), to encrypt and decrypt messages. The longer you use the same key, the greater the opportunity becomes for someone to crack that key. To increase security, generate new keys on a regular basis; the commands to do so will differ based on the utility you are creating the keys for.

Disable unused ports: Disabling unnecessary services (mentioned previously) builds security by removing doors that someone could use to enter the server. Similarly, IP ports that are not needed also represent doors that could be used to sneak in. It is highly recommended that unused ports be disabled to increase security, along with device ports (both physical and virtual).
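Here is the hash check promised above: compute a file's SHA-256 digest and compare it to the value the vendor published. The file name and expected digest below are placeholders, not real values.

```python
# Verify a downloaded file against its published hash before installing it.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # read in 64 KB pieces
            h.update(chunk)
    return h.hexdigest()

expected = "0000...placeholder...0000"   # value published by the vendor
actual = sha256_of("router-firmware-2.1.bin")  # hypothetical file name

print("OK to install" if actual == expected else "Hash mismatch - do NOT install")
```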

Access Controls
Access control describes the mechanisms used to filter network traffic and determine who is and who is not allowed to access the network and network resources. Firewalls, proxy servers, routers, and individual computers can all maintain access control to some degree by protecting the perimeters of the network. Because access control limits who can and cannot reach the network and its resources, it is easy to understand why it plays a critical role in a security strategy. Several types of access control mechanisms exist, as discussed in the following sections. Be sure that you can identify the purpose and types of access control.

Mandatory Access Control
Mandatory access control (MAC) is the most secure form of access control. In systems configured to use mandatory access control, administrators dictate who can access and modify data, systems, and resources. MAC systems are commonly used in military installations, financial institutions, and, because of new privacy laws, medical institutions.

MAC secures information and resources by assigning sensitivity labels or attributes to objects and users. When users request access to an object, their sensitivity level is compared to the object's. A label is a feature applied to files, directories, and other resources in the system. It is similar to a confidentiality stamp. When a label is placed on a file, it describes the level of security for that specific file. It permits access by files, users, programs, and so on that have a similar or higher security setting.

Discretionary Access Control
Discretionary access control (DAC) is not enforced by the administrator or operating system. Instead, access is controlled by an object's owner. For example, if a secretary creates a folder, he decides who will have access to that folder. This access is configured using permissions and an access control list (ACL).

DAC uses an ACL to determine access. The ACL is a table that informs the operating system of the rights each user has to a particular system object, such as a file, a folder, or a printer. Each object has a security attribute that identifies its ACL. The list has an entry for each system user with access privileges. The most common privileges include the ability to read a file (or all the files in a folder), to write to the file or files, and to execute the file (if it is an executable file or program). Microsoft Windows servers/clients, Linux, UNIX, and Mac OS are among the operating systems that use ACLs. The list is implemented differently by each operating system. In Windows Server products, an ACL is associated with each system object. Each ACL has one or more access control entries (ACEs) consisting of the name of a user or group of users. The user can also be a role name, such as "secretary" or "researcher." For each of these users, groups, or roles, the access privileges are stated in a string of bits called an access mask. Generally, the system administrator or the object owner creates the ACL for an object.

Rule-Based Access Control
Rule-based access control (RBAC) controls access to objects according to established rules. The configuration and security settings established on a router or firewall are a good example. When a firewall is configured, rules are set up that control access to the network. Requests are reviewed to see whether the requestor meets the criteria to be allowed access through the firewall. For example, if a firewall is configured to reject all addresses in the 192.166.x.x IP address range, and the requestor's IP is in that range, the request would be denied. In practical application, rule-based access control is a variation on MAC. Administrators typically configure the firewall or other device to permit or deny access. The owner or another user does not specify the conditions of acceptance, and safeguards ensure that an average user cannot change settings on the devices.

Role-Based Access Control
In role-based access control (RBAC), access decisions are determined by the roles that individual users have within the organization.

Role-based access requires the administrator to have a thorough understanding of how a particular organization operates, the number of users, and each user's exact function in that organization. Because access rights are grouped by role name, the use of resources is restricted to individuals who are authorized to assume the associated role. For example, within a school system, the role of teacher can include access to certain data, including test banks, research material, and memos. School administrators might have access to employee records, financial data, planning projects, and more.

The use of roles to control access can be an effective means of developing and enforcing enterprise-specific security policies and of streamlining the security management process. Roles should receive only the privilege level necessary to do the job associated with that role. This general security principle is known as the least privilege concept. When people are hired into an organization, their roles are clearly defined. A network administrator creates a user account for the new employee and places that account in a group with people who have the same role in the organization.

Least privilege is sometimes too restrictive to be practical in business. Using teachers as an example again, some more experienced teachers may have more responsibility than others and might require increased access to a particular network object. Customizing access for each individual is a time-consuming process.
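A minimal sketch of the role-based idea, reusing the teacher example from above: rights attach to roles, and a user is allowed an action only through role membership. The role names and permissions here are invented for illustration.

```python
# Role-based access control: permissions belong to roles, not to users.
ROLE_PERMISSIONS = {
    "teacher": {"read_test_bank", "read_research", "read_memos"},
    "school_admin": {"read_employee_records", "read_financials", "edit_projects"},
}
USER_ROLES = {"alice": {"teacher"}, "bob": {"school_admin"}}

def is_allowed(user: str, permission: str) -> bool:
    """A user gets a permission only via membership in a role that grants it."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "read_test_bank"))   # True
print(is_allowed("alice", "read_financials"))  # False: not part of her role
```

Note how least privilege falls out of the design: alice has exactly the rights her role grants and nothing more.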

Chapter 12 Network Troubleshooting

Network administrators have many responsibilities and duties. One of the most frequently practiced is troubleshooting. Troubleshooting is essential to keeping a network healthy. No matter how advanced the network is or how many preventive maintenance schedules are in place, troubleshooting will always be needed. Because of this, network administrators must develop their troubleshooting skills. This chapter focuses on troubleshooting and on some of the utilities and tools that can help you work through a troubleshooting problem.

Steps and Procedure of Troubleshooting
There are specific steps to effective network troubleshooting. These steps help you perform the troubleshooting process easily and effectively. If you follow the steps correctly, the problem is fixed in less time, and your precious time is saved. The CompTIA Network+ objectives identify the procedure. The steps of the troubleshooting process are described next.

Identify the Problem
The first and foremost step of the troubleshooting process is identifying the problem. In this step, information is gathered, symptoms are identified, users are questioned, and you determine whether anything has changed. For gathering the information, you should have good communication skills, a little patience, and sufficient knowledge of the operating system in use. After you know the symptoms, start to identify their potential causes.

Identify Symptoms
Some computer problems affect only a single user in a single location, and some problems affect thousands of users in multiple locations. An important part of the troubleshooting process is to establish the affected area. This helps to point out the strategies that can be used to resolve the problem. Problems that affect many people are often associated with connectivity issues. A troubleshooting problem for a single user begins and ends at that user's computer. The first clue to the problem comes from understanding who is affected by it.

Figure Out Whether Anything Has Changed
Sometimes there is a problem with a workstation's access to an entire network or a database. Some users also claim that their computers simply stopped working. These problems can arise from newly installed applications, new hardware, a change in the position of the computer, recent updates, or a new password or username. Identifying recent changes in the system correctly will help you troubleshoot a problem.

Build a Theory of Probable Cause
There is a chance that a single problem on the network has many causes. With sufficient and appropriate information, you can eliminate many of them. When looking for the cause, first consider the easiest solution. Even in complex network designs, sometimes the easiest solution is the right one. For example, if a user cannot log on to the network, before replacing the network interface card, it is better to confirm the network settings.

Test the Theory to Determine the Cause
Establish the theory and confirm it. This helps to determine the cause. It can be understood easily: if a user can't print after downloading new software, the new software has probably changed the print drivers. If the theory is confirmed, the next step is to plot a course of action. If the theory is not confirmed, you need to establish a new theory or escalate the issue.

Create a Plan of Action
The next step is to create a plan of action. Planning is a very important part of the whole troubleshooting process. It involves formal or informal written procedures. When the plan is ready, you are ready to implement a solution. Solutions include replacing hardware, applying a patch, plugging in a cable, and so on. In the ideal situation, the first solution will fix the problem; unfortunately, if it does not, you need to start again.

Implement the Solution or Escalate
After applying the corrective measure to the network, workstation, or server, you need to test the results. This is when you find out whether you were right and whether the remedies you applied worked.

Determine Whether Escalation Is Necessary
There are times when problems fall outside the scope of your knowledge and you need additional help. The procedures for technical escalation do not follow a specific set of rules; they vary from situation to situation and organization to organization. The general and first rule is to start with the closest help. If your organization has an IT team, talk to its members; every person has different experiences, and someone may know the solution to the issue at hand. If you still don't find a solution, it is important to notify your supervisor about the issue, because the problem may bring down a server.

Verify Full System Functionality
There are times when applying a fix corrects one problem but creates another. This can be understood by example: you add a new network application, and the new application requires more bandwidth than the older one or than the current network infrastructure can support. Consequently, network performance is compromised. Changes made to the network always have effects, and those effects can sometimes be negative. Changes in one network area will affect other areas of the network. Actions such as adding clients and replacing switches or hubs can have unexpected results. It is difficult to predict the effects of changes. The safe approach is to consider every possible effect. To carry this out,

you need to think outside the box. You must verify full system functionality, and this should be done before you are satisfied with the solution. Once you are satisfied, you should look deeper into the problem and implement preventive measures so that the same problem will not occur again.

Document the Findings, Actions, and Outcomes
The step that is most often neglected in the troubleshooting process is documentation, but documentation is as important as any of the other steps and procedures. This step includes a record of the steps taken during the fix, not just the solution. An adequate amount of information should be included in the document so that other network administrators can use it in the future—information such as when the solution was applied and why the problem was fixed. If the same kind of problem occurs again, it is then easier to apply the solution. Every kind of relevant detail should be included, such as software patch numbers, firmware versions, and so on. The results of both successes and failures should be recorded. The information about failures will keep network administrators from going down the same road twice, and the successful solutions will help reduce the time needed to apply a fix. Sufficient information about the person who fixed the problem is also part of the documentation. This is vital because that person can then be easily tracked down: if anyone has a question about the solution, they can contact the relevant person.
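The documentation step lends itself to a structured record. The following sketch shows one possible shape for such a record; the field names are assumptions based on the items listed above, not a standard format.

```python
# One possible shape for a troubleshooting record; fields are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TroubleTicket:
    reported: date
    symptom: str
    steps_taken: list = field(default_factory=list)  # every step, not just the fix
    resolution: str = ""
    fixed_by: str = ""   # recorded so the person can be contacted later

ticket = TroubleTicket(
    reported=date(2020, 3, 9),
    symptom="Users in accounting cannot reach the file server",
    steps_taken=["Verified link lights", "Replaced patch cable on port 12"],
    resolution="Faulty patch cable; replaced and verified full functionality",
    fixed_by="jdoe",
)
print(ticket)
```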

Chapter 13 Hardware and Software Troubleshooting Tools

Network administration involves using the right tools for the job. Using the right tools, and knowing when and how to use them, is very important. Selecting the correct tool may seem easy, but network administrators have a wide variety of tools available, and choosing the correct one can be mind-twisting. That wide variety of tools is discussed in this chapter.

The Basic Tools
There is a wide variety of tools available to network administrators. The most commonly used tools cost only a few dollars, which is a very good thing. One of the most commonly used is the screwdriver. The common screwdriver can help you replace a network interface card (NIC), remove a switch to replace a fan, or remove the cover from a hub. Advanced, specialized, and expensive tools will not help you when a simple screwdriver is enough to pull the job off.

Wire Crimpers, Strippers, and Snips
One of the tools used regularly is the wire crimper. The wire crimper is a tool used to attach media connectors to the ends of cables. Different types of wire crimpers are used for attaching different connectors and cables: one type is used for attaching Bayonet Neill-Concelman (BNC) connectors to coaxial cabling, and another type is used to attach RJ-45 connectors to unshielded twisted-pair (UTP) cable. Wire crimpers look like a pair of special pliers. The cable and the connector are inserted into the crimper separately, and then you squeeze the crimper's handle; that is how the wire and the connector are joined.

The other wiring tools commonly used are snips and strippers. Wire strippers come in a wide variety of sizes and shapes. Each stripper has a different function: some work with UTP cable, and others are manufactured to strip the outer sheathing from coaxial cable. Strippers are used to remove the sheathing from the wire; they are specifically designed so that clean contact can be made. Wire snips are used to cleanly cut cables. If network administrators buy cable in bulk, wire snips are used to cut the cable to the required and desired length, and wire strippers are used to prepare the cable for attachment to the connectors.

Tone Generator and Probes
Many hours of frustration and irritation can be avoided with the help of a toner probe. This device has two parts: the tone generator, also known as the toner, and the tone locator, also known as a probe.

The function of the toner is to send a tone down one end of the cable, and the probe at the other end receives the toner's signal. This tool helps you find both ends of a cable. The tone generator and the tone locator are also referred to as the fox and hound. Though the device is useful, it has a drawback: it can be time-consuming, because it must be attached to each cable independently.

Loopback Adapter
Many items fall under the loopback umbrella. These items allow you to configure and test a device. Windows has a loopback adapter that has no hardware and acts as a dummy network interface used for testing in a virtual network environment. Many loopback adapters with actual hardware can be bought, and they can be used for testing fiber jacks, Ethernet jacks, and so on.

Protocol Analyzer
Protocol analyzers, as the name indicates, are used for analyzing protocols such as UDP, HTTP, TCP, and FTP. They can be software or hardware based. They are used for many purposes, such as helping diagnose computer networking problems, identifying unwanted network traffic, and revealing unused protocols. They capture the communication stream between systems, like packet sniffers do. The protocol analyzer reads and decodes the network traffic. With the help of this decoding, the administrator views the network communication in plain English, which gives a better idea of the traffic flowing on the network at that time. When damaged or unwanted traffic is spotted, the analyzer isolates it. The protocol analyzer can also help you justify the purchase of new hardware to management; the justification is provided with the help of real-time trend statistics. Protocol analyzers are used for two main reasons: identifying protocol patterns and decoding information.

Media/Cable Testers
A media tester is also known as a cable tester. The term covers a wide range of tools manufactured to test cable. A media tester checks whether a cable works properly: it lets the administrator test a segment of cable, checking for improperly attached connectors, shorts, or any other cable faults. Media testers tell you whether the cable works properly and efficiently, and also where a problem is. A "line tester" is used for checking telephone wiring; Ethernet line testers and fiber line testers are also available.

TDR and OTDR
TDR stands for time-domain reflectometer. It is a device used to send a signal through a medium to check the cable's continuity. A good-quality TDR can locate many types of cabling faults, such as damaged conductors, a severed sheath, faulty crimps, loose connectors, shorts, and many more. Most network cabling is copper based, so most of these tools are designed for copper as well. Optical testers are used when you test fiber-optic cable. An optical cable tester accomplishes the same function as the wire media tester, but it works on optical media. There is a common problem with optical cable—a break in the cable that stops the signal from reaching the other end. Once you determine that there is a break, the challenge is to locate it. To locate the break, there is a tool known as an optical time-domain reflectometer (OTDR). Using these tools, you can locate the break and determine how far along the cable it occurs.
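The arithmetic behind a TDR is simple: the distance to a fault is half the measured round-trip (echo) time multiplied by the signal's speed in the cable. The sketch below uses an assumed nominal velocity of propagation (NVP) typical of copper cable and an invented echo time.

```python
# Distance to a cable fault from a TDR echo time:
#   distance = (speed_in_cable * round_trip_time) / 2
C = 299_792_458              # speed of light in a vacuum, m/s
NVP = 0.66                   # fraction of c for this cable (assumed value)
round_trip_seconds = 500e-9  # 500 ns echo measured by the TDR (example value)

distance_m = (C * NVP * round_trip_seconds) / 2
print(f"Fault at roughly {distance_m:.1f} m down the cable")  # ~49.5 m
```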

Media/Cable Testers
A media tester, also known as a cable tester, covers a wide range of tools built to test cabling. It tells you whether a cable works properly and efficiently, and where any problem lies. A media tester lets the administrator test a segment of cable, checking for improperly attached connectors, shorts, and other cable faults. A "line tester" is used for checking telephone wiring; Ethernet line testers and fiber line testers are also available.
TDR and OTDR
TDR stands for time-domain reflectometer. It is a device that sends a signal through a medium to check the cable's continuity. A good-quality TDR can locate many types of cabling faults, such as damaged conductors, a severed sheath, faulty crimps, loose connectors, and shorts. Most network cabling is copper-based, so most of these tools are designed for copper. Optical testers are used when you test fiber-optic cable: an optical cable tester performs the same function as a wire media tester, but on optical media. A common problem with optical cable is a break that stops the signal from reaching the other end. Once you know there is a break, the challenge is to locate it. For that there is a tool known as an optical time-domain reflectometer (OTDR), which lets you locate the break and determine how far along the cable it occurs.
Multimeter
The multimeter is one of the simplest and easiest cable-testing devices. Using the continuity setting, a multimeter can check for shorts along the length of a coaxial cable. You can also test twisted-pair cable if you have needlepoint probes and know the correct cable pinouts. The basic multimeter combines several meters into a single unit and measures resistance, current, and voltage; advanced models also measure temperature. A multimeter consists of terminals, a display, a dial, and probes; the dial is used to select the measurement range. There are two types: analog and digital. The digital multimeter has a numeric digital display, while the analog one has a dial display. Inside the multimeter, the terminals are connected to different resistors, depending on the range selected.
A network multimeter performs additional functions, such as pinging specific network devices. It can ping and test the response time of key networking equipment such as DNS servers, DHCP servers, and routers. It can also verify network cabling: it can detect split pairs, isolate cable shorts, and find other faults. Network multimeters help administrators locate cables at patch panels, using digital tones sent to wall jacks. Their results can be downloaded for inspection; network multimeters have a USB port for linking to a PC.
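As a rough illustration of the response-time test a network multimeter automates, the following Python sketch times a TCP connection attempt to a device. The host and port below are placeholders chosen for the example, not values from the text; ICMP ping requires raw sockets, so a TCP connect is used here instead.

import socket
import time

def tcp_response_time(host: str, port: int, timeout: float = 2.0):
    """Time a TCP connection attempt, similar to a network multimeter's
    response-time test against key equipment (DNS servers, routers, etc.)."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0  # milliseconds
    except OSError:
        return None  # unreachable, refused, or timed out

# Example: a public DNS server on TCP port 53 (chosen purely for illustration).
elapsed = tcp_response_time("8.8.8.8", 53)
print(f"response: {elapsed:.1f} ms" if elapsed is not None else "no response")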

Spectrum Analyzer
A spectrum analyzer measures the magnitude of an input signal. Nowadays, spectrum analyzers are used with Wi-Fi to reveal Wi-Fi hotspots and to detect wireless network access, often with LED visual feedback. These devices are used during troubleshooting to detect where powerful RF signals are.
Packet Sniffers
Packet sniffers are also known as packet or network analyzers. They are commonly used on networks to monitor packet flow, and they can be either software or hardware devices. A packet sniffer captures data and saves it so the data can be reviewed later. Packet sniffers are also used on the internet to capture data traveling between computers. Internet packets travel long distances through different routes, servers, and gateways, and along these paths packet sniffers can sit quietly and collect data.
Two key defense techniques can be used to protect a network against sniffing. First, use a switched network: in a switched network, data sent from one PC is directed by the switch only to the intended destination. Older networks used hubs, which do not switch traffic to the individual user but repeat it to all users connected to the hub's ports. Second, encrypt sensitive data. Encryption is implemented with the help of HTTPS and the Secure Sockets Layer (SSL) protocol, and the IPsec protocol provides end-to-end encryption services for public networks.
Port Scanner
Port scanners are software-based security utilities. If you want to search a network host for open ports on a TCP/IP-based network, a port scanner is the tool to use. Some ports are closed and some are open by default, depending on the OS, and open ports can cause trouble. Like packet sniffers, port scanners can be used by administrators as well as by hackers. Hackers use a port scanner to find an open port and then use it to access a system. Port scanners can be obtained through the internet for modest money or even free of cost. After installation, the scanner probes a computer system running TCP/IP, looking for TCP or UDP ports that are open and listening.
There are several port states a scanner can report. Open/listening means the host sent a reply indicating that a service is listening on the port. Closed/denied means access to the port is denied. Blocked/filtered means the port is filtered and secured, with no response from the host. Administrators should be well aware of which ports are open and vulnerable, because others can review the status of those ports too. There are utilities and tools available for this purpose; netstat is used to get a quick status of the ports.
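A minimal port scanner is short enough to sketch here in Python. This is an illustration of the open/closed states described above, not a hardened tool; connect_ex cannot distinguish "closed" from "filtered" on its own, and you should only scan hosts you are authorized to test.

import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Probe TCP ports and report open/closed, like a basic port scanner."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success (open/listening) and an
            # error code otherwise (closed or filtered).
            results[port] = ("open" if s.connect_ex((host, port)) == 0
                             else "closed/filtered")
    return results

# Scan a few well-known ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443, 3389]))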

Wi-Fi Analyzer
Many networks are wireless these days, and they have issues of their own that you need to pay attention to. Wireless survey tools show the quality and quantity of wireless network coverage in an area using heat maps. These tools also let you see and analyze security settings and access points. With their help you can design and deploy an efficient network, and you can also use them to find weaknesses in existing networks.
Bandwidth Speed Tester and Looking Glasses
Two extremely useful kinds of websites in networking are speed test sites and looking-glass sites. Speed test sites let you check the speed of your connection and help you determine whether you are getting the speed that was promised. Looking-glass sites are used to gather routing information. They are servers running looking-glass (LG) software; they act as a read-only portal that gives information about the backbone connection. These servers show Border Gateway Protocol information, ping information, and trace (traceroute/tracert) information.
Environmental Monitors
Environmental monitoring covers the temperature of server and network equipment rooms. Computer equipment has a wide heat tolerance range: a typical system can operate between roughly 50 and 93 degrees Fahrenheit (10 to 33.8 degrees Celsius). The accepted optimum temperature for server rooms is around 70 to 72 degrees Fahrenheit (21 to 22 degrees Celsius). At this temperature, the equipment in the room operates well and the people working in it feel comfortable. Human beings tolerate higher temperatures than computer equipment requires, which is one reason servers should be placed in a separate room rather than with people.
Overheating is one of the biggest problems for servers and network equipment. Servers generate a lot of heat, and overheating leads to component failure. Heating causes components to expand, and cooling causes them to contract; these temperature swings cause chips to shift, and when chips shift there is a chance they will separate from their connections. This process is known as chip creep. To minimize it, the temperature is kept at a moderate, steady level: with less expansion and contraction, the reliability of the components increases.
Environmental monitors are a really important part of this. They help the administrator keep equipment rooms at the right temperature. Placed in the room, they constantly track fluctuations in temperature and also show the humidity level. If there is a profound change in temperature, an alert is sent to the administrator. Such changes take place when some piece of equipment produces a lot of heat or the air conditioner stops working well. Monitors are often not strictly needed, but having them installed gives the administrator peace of mind.
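As a small illustration of the threshold logic such a monitor applies, here is a Python sketch that classifies readings against the ranges quoted above. The sensor readings are fabricated for the example; a real monitor would poll actual hardware.

def check_room_temperature(temp_f: float) -> str:
    """Classify a server-room temperature reading against the ranges
    discussed above (optimum roughly 70-72 F, tolerable 50-93 F)."""
    temp_c = (temp_f - 32) * 5 / 9
    if temp_f < 50 or temp_f > 93:
        status = "ALERT: outside equipment tolerance"
    elif 70 <= temp_f <= 72:
        status = "optimal"
    else:
        status = "acceptable, but worth watching"
    return f"{temp_f:.1f} F ({temp_c:.1f} C): {status}"

# Fabricated readings standing in for a monitor's polling loop.
for reading in (71.0, 85.5, 96.2):
    print(check_room_temperature(reading))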

Keeping It Cool
If there is a heat problem in the equipment room, the solution is straightforward: air conditioning. However, you cannot use just any air conditioner for this purpose; an old window unit from the late 1960s will not do. A high quality of protection is a must in such cases. A constant temperature is maintained with the help of server-environment-specific air conditioning units; high-quality units promise accuracy of plus or minus 1 degree Fahrenheit. Some units also have an audible alarm and can communicate with the management system about the server room temperature.
Selecting the right type of air conditioner for the server room can be tricky for the network administrator. Air conditioners come in many sizes and types and are rated on how much area, in cubic feet, they can cool. The administrator should gather basic information about the room before choosing, including how much of a temperature increase the hardware in the room causes. Standby air conditioning units are also installed in some server rooms; this depends on the price range, the company's willingness to pay, and how much fault tolerance you need.

Chapter 14 Troubleshooting Common Network Service Issues

There are a lot of common problems with network services, and network administrators should know them better than anyone. Some of the most common are listed below.
Names Not Resolving
Users cannot take advantage of the DNS service when the wrong Domain Name Service (DNS) values are entered. During router configuration, it is important to enter the correct DNS values. When the wrong values are entered, resolution may not occur, or it could take a long time, which gives the appearance that the web is taking a long time to load. To avoid name-resolution problems, make sure the correct DNS values are entered in the router configuration. A minimal scripted check for this issue is sketched below.
Incorrect Gateway
Default gateways are used to transfer data when the user specifies no other route. When you plan to add a new router to the network, make sure you add the gateways and paths needed to ensure swift data transmission.
Incorrect Netmask
An incorrect subnet mask is another common network service issue. When the subnet mask is wrong, the router does not work properly. The router's job is to route incoming traffic; with a wrong value, it routes traffic to subnets that do not exist. To avoid this problem, the subnet value on the router should match the configuration of the network.
Duplicate IP Address
IP addresses on a network must be unique: the router, each host, and every network card should have its own. There are two possible symptoms of a duplicate IP address: in one case you will receive messages reporting the duplicate addresses, and in the other the network traffic will simply become unreliable. In either case, you should correct the problem and make sure that duplicate addresses do not exist anywhere on your network, including on routers.
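Several of the issues above can be spot-checked from any host. For the name-resolution case, this minimal Python sketch tries to resolve a hostname and reports a failure that would point at the DNS settings; example.com is just a placeholder.

import socket

def check_name_resolution(hostname: str) -> str:
    """Try to resolve a hostname; failure suggests wrong or unreachable
    DNS values, as described under 'Names Not Resolving' above."""
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
        return f"{hostname} resolves to: {', '.join(sorted(addresses))}"
    except socket.gaierror as exc:
        return f"{hostname} did not resolve ({exc}); check the DNS settings"

print(check_name_resolution("example.com"))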

Duplicate MAC Addresses
The MAC address is hardcoded into the network interface card and cannot normally be changed. It has two components: a serial number, which should be unique, and a portion identifying the vendor. The MAC address values seen by RARP, ARP, and other protocols should stay constant. If someone is trying to add a rogue device that impersonates another, you must find it and disable it.
Expired IP Address
DHCP leases IP addresses to clients and renews those leases for as long as the clients need them. If an address has expired, it usually means the DHCP server is unavailable or down; when the server is down, the client will eventually lose its address. Each system should be assigned a unique IP address so it can communicate on the network easily. Clients on a LAN have a matching subnet mask and a private IP address.
Rogue DHCP Server
A DHCP server that is added by an unauthorized party and is not under administrative control is known as a rogue DHCP server. Because the network administrator does not control it, it can be used for several purposes, such as handing out false values or setting clients up for a network attack.
Untrusted SSL Certificate
An untrusted SSL certificate is one that has expired or one that is not properly signed. Sometimes this issue appears because the user's browser is old or not widely supported. A scripted expiry check is sketched after the firewall discussion below.
Incorrect Time
Incorrect time can be more than an annoyance. To avoid this problem, most network devices use the Network Time Protocol (NTP) to keep the system time synchronized with a designated server. Network administrators should make sure the time service is updated, secured, and patched.
Exhausted DHCP Scope
The DHCP scope is the pool of possible IP addresses a DHCP server can issue. If the pool becomes exhausted, the server cannot give devices the values they need. One solution is to increase the scope; another is to decrease the lease time of the addresses. When the lease time is reduced from days to hours, hosts give up their addresses sooner, and those addresses become available to other users.
Blocked TCP/UDP Ports
According to good security practice, only the ports that are needed should be enabled on a network. Sometimes administrators discover that a port they need is among the blocked TCP/UDP ports. If a port you need is blocked by the firewall, open that port and allow it to be used by making an exception.
Incorrect Firewall Settings
Firewall settings govern the opening and closing of ports. Incorrect firewall settings block ports you need and open ports you don't. Opening unnecessary ports is the worse mistake, because an intruder could use an open port to access the system; every open port represents a potential vulnerability. The network administrator should be well aware of which ports are open and should close the unnecessary ones for security reasons.
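Returning to the untrusted SSL certificate issue above, an expiry check is easy to script. This is a minimal Python sketch, not a hardened tool; the hostname is a placeholder chosen for illustration. Note that with a default context, a certificate that genuinely fails verification raises ssl.SSLCertVerificationError, which is itself the diagnosis.

import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expires(host: str, port: int = 443) -> int:
    """Fetch a server's certificate and report days until it expires."""
    context = ssl.create_default_context()
    # An untrusted or expired certificate makes wrap_socket raise
    # ssl.SSLCertVerificationError here, which already answers the question.
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

print(f"certificate valid for {days_until_cert_expires('example.com')} more days")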

Incorrect ACL Settings
The purpose of the Access Control List (ACL) is to define who and what can access your system. Incorrect ACL settings can keep too many out, but the more common error is letting too many in. When the ACL is used properly, you can ignore requests from specified systems or users. If you find that some IP address is constantly scanning your network, you can block that IP address at the router, and it will then be rejected automatically any time it attempts to use the network again.
Unresponsive Service
There are times when a service does not respond. This is usually due to one of three issues: the service could be down, overloaded, or badly configured. The first step is to investigate which of the three is the real problem and then decide how to fix it. If the service or server is overloaded, increase capacity or look for a way to balance the load. If it is down, look at ways to bring it back up. If the configuration is at fault, make the necessary changes to configure it properly. A simple timing check is sketched at the end of this chapter.
Hardware Failure
Finding hardware failure is a challenge for network administrators; it is not an easy job. It involves several processes, such as performance monitoring and baselining. One way to track down a hardware failure is to know what kind of devices are used on the particular network and what function each device performs.
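For the unresponsive-service triage described above, timing a simple request helps separate "down" from "slow or misconfigured." This is a hedged Python sketch assuming an HTTP service; the URL is a placeholder for illustration.

import time
import urllib.request
import urllib.error

def check_http_service(url: str, timeout: float = 5.0) -> str:
    """Distinguish a down service from a slow or misconfigured one by
    timing a simple HTTP request, per the triage steps above."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            elapsed = time.perf_counter() - start
            return f"{url}: HTTP {response.status} in {elapsed:.2f}s"
    except urllib.error.HTTPError as exc:
        return f"{url}: responding but returned HTTP {exc.code} (check configuration)"
    except (urllib.error.URLError, TimeoutError) as exc:
        return f"{url}: no response ({exc}); service may be down or overloaded"

print(check_http_service("http://example.com"))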

Conclusion
Passing the Network+ exam will open new doorways for you in the networking industry. Make sure you study and learn all the concepts presented in this book to ensure that you pass. The exam will test networking basics and will also check your understanding of networking devices, their troubleshooting, and how you can increase the proficiency of wired and wireless devices.
You will also find the information needed to cover all the objectives of the Network+ exam; all of it is easily accessible and identifiable in the text. Within the chapters, you will find the important exam topics you need to study in order to pass. After reading the book and preparing hard, you will be able to attempt the Network+ exam with confidence. The book includes many additional resources that will help you prepare in the best possible manner. Do not just read the chapter content; make a habit of finding the exam questions embedded in the book, reviewing them, and studying their possible answers.


COMPTIA NETWORK+ Simple and Effective Strategies for Mastering CompTIA Network+ Certification from A-Z

WALKER SCHMIDT

Introduction
The first question that poses itself is: what is the CompTIA Network+ certification? The CompTIA Network+ certification is an exam that shows whether you have adequate networking knowledge to make a career in the IT market. Though it might appear platform-specific, it is far from it: the exam aims to prepare you for work on any platform you might encounter in the field. It is a cluster of all the basics you will need to master before specializing in a specific vendor solution, and it is the only certification that accounts for knowledge of both wireless and wired networks.
CompTIA's Network+ is quite extensive, but it can be summarized easily. It aims to validate individuals who possess the skills and knowledge required for the troubleshooting, configuration, and management of the wireless and wired networks you might encounter all around the world. To earn the certificate, you are going to need real proficiency with the emerging technologies in the field. It is aimed at professionals who have fulfilled its requirements, and it is crucial for developing a successful IT career, especially today, when the field is both extensive and full of promising young talent.
Many different positions benefit greatly from the certificate. Any kind of network administrator or technician will have an easier time finding good work because of it. Though more and more people have earned this certificate, its prestige is as recognizable as ever, and the greater your recognition in the field, the more high-paying jobs you will be able to get. This means you will both earn more money and be happier about your work.
The certificate confirms that the individual has proper command of many important skills. The most basic skill it teaches, checks, and improves is the maintenance and management of essential networks. This is a relatively basic skill, though a crucial one: knowing how complex, advanced networks work is what makes a good IT expert, but the basics are the basics for a reason. Another skill the test measures is the ability to implement a network. This includes the ability to analyze networks and see their benefits and drawbacks, as well as the ability to design new ones. On top of that, you will need a solid grasp of security standards and protocols, as well as proper troubleshooting processes.
The range of topics and domains the exam covers is extensive as well. It meticulously tests your knowledge of networking concepts and infrastructure, which is tightly connected to the design and implementation of networks. It moves the dialogue from theory to practice, testing your ability to set up a network both digitally and physically. As you can see, it is quite extensive, and it tends to be difficult for most people; the standards the certificate sets are quite a tall order to achieve.

It expects you to be able to identify the best practices and policies for managing and operating a network, to know the common attack methods used against networks, and to be proficient with the tools of the field. Most of these areas are covered in other exams given by CompTIA.
CompTIA is the world leader when it comes to vendor-neutral IT certificates, though most of its other exams are not nearly as extensive as the CompTIA Network+ certification. The company has issued more than 2 million certifications all over the world. Ever since it was created, it has aimed to give IT professionals a platform to prove their worth and take charge of their field. The company has existed for more than 20 years, and in that time it has developed many different training and certification programs covering most parts of the IT field, including computing support, security, networking, Linux development, and more. These exams are constantly revised and updated to make sure the certificates stay up to date and ready to face any challenge of modern technology.

Chapter 1 How Do I Get My CompTIA Network+ Certification?

The exam is composed of multiple-choice and performance-based questions, so the test will not be much different from any other test you might have taken so far. We talked about the subjects the test covers, but knowing just that might not be enough.
Of course, the most important thing you can do to pass the exam is to work hard on learning what needs to be learned. Hard work is something that cannot be replaced by any amount of talent, especially in the IT field. As with any other exam, being sufficiently prepared is what dictates whether you will pass or not. Everyone has their own way of approaching this, and any test in college or high school can be an indicator of what kind of prep work suits you. However, it is not as easy as it might appear.
The first thing you need to do is understand what is expected of you. No matter how much experience you have, going into the exam without knowing what's on it would be unwise. Take your time to get to know everything that is on the exam: if you know what kinds of questions you will encounter, you can see what you need to learn and what you already know. There are a few things you can do to make this easier on yourself. You can download the exam objectives, as well as practice questions; this is a good way to get a feel for the test. CompTIA Network+ has a product page that you can review for additional help, where you can find articles about the experiences of certification holders.
You need to be aware of the gap between what you need to know and what you already know. The people taking the examination come from different walks of life and have different amounts of experience in the field. There are no requirements for taking the exam; there are, however, some guidelines. Read up on them and make sure you are mentally prepared for the task ahead. How much knowledge exam-takers lack varies drastically, so regardless of how experienced you are, take as many practice exams as possible in the areas where your knowledge might be lacking. Being objective and honest with yourself will help you go through the tasks ahead more easily and thoroughly.
When you identify your gaps, you need to understand how they can be filled. CompTIA offers a set of resources that can help you learn and train for its exams. The set comes with books and prep software that will help you immensely. The self-study guide that comes as part of the set will give you the chance to develop all of the skills you might require in the IT field, and it offers complete coverage of the subjects you will encounter on the exam.

The sets of practice questions available on the official website are a great learning tool. Don't mistake them for the questions that will be on the actual test; use them as a guideline more than a rule. The questions on the exam will be different and most likely more difficult. The CompTIA CertMaster is another line of products that will help you prepare for the exam. CompTIA developed it to give exam-takers a platform for learning the more advanced subjects of the exam.
You might be one of the people who have trouble learning on their own. You might find it difficult to focus when studying, or have trouble making studying part of your daily routine. If that is the case, you are in luck: many public academic institutions offer CompTIA Network+ certification training. If you are more comfortable learning in a group and under academic circumstances, this is perfect for you. You will have to be ready to dish out a pretty penny, though; these classes can cost anywhere from a few hundred dollars to several thousand. The classroom training page of the CompTIA website can help you find suitable classrooms in your area, and if you work in an organization that currently aims to improve its IT quality, CompTIA offers instructor-led training programs.
A common concern about getting certified is how much time will need to elapse before you earn the certification. That will always depend on how much you know and how much of a gap you need to fill; other factors are how quickly you learn and how confident you are in your knowledge. For example, some classroom programs will take you through the material in no more than a week, while others can take as long as a few months. Likewise, when learning on your own, it may take you anywhere from a few days to several months. There is no exact way to determine how much time it will take. However, it is recommended to take as much time as you need instead of rushing; if you fail the exam because you rushed, you will end up spending more time anyway.
The exam takes place at a Pearson VUE testing center, in a highly secure environment. If such a center exists near you, you can use the opportunity to ask about the steps you will need to take. The first thing you need to do is buy your voucher. The CompTIA Network+ voucher can be purchased from the website and contains a code that lets you sign up for the exam; visit the website to see how much the voucher costs. When purchasing the voucher, you can instead buy the bundle that includes it along with a few learning guides, which can save you a lot of money. The next thing to do is schedule your exam. To take it, you need to locate an approved location, which, as mentioned before, means a Pearson VUE testing center, and schedule an appointment there.
The exam will last for 90 minutes. However, remember that the exam is not about speed. You should

not rush yourself; take your time. It is important to remember to double-check as much as you can. Set your own pace and be comfortable with it, but do not forget that the exam is only 90 minutes long. The exam might seem like quite a bit of hassle and hard work; however, it is very important for any IT professional.

Why CompTIA Network+?
The IT field is a flourishing one, full of new, prominent talent. In such a field it is hard to diversify yourself from the others and really stand out; there are just so many people with their own unique abilities and a huge knowledge of the technologies used worldwide. So, what can you do?
There is a huge number of certificates out there. While they do not have any intrinsic value, their mention means quite a bit to any professional in the world, so holding several prestigious certificates will show that you are extremely knowledgeable. However, in a pool of so many different certifications, how do you recognize which are good for you? That is difficult to summarize in a few sentences; it depends on what you want to do and what your employers are looking for.
The good thing about CompTIA Network+ is that it is recognized by most if not all employers. Obtaining it demonstrates your proficiency with a wide variety of networking technologies. It is a good start to any portfolio and will be sure to get you more work. This means you will see an increase in your salary, and you will be more satisfied with yourself, as will your employers. More recognition in the field will show how serious you are about working and will open more career choices for you.
Like most other trophies and accolades, the CompTIA Network+ certification will make you feel better about yourself. It is a pat on the back, but from the whole field rather than a single individual. You will know that you have enough knowledge to pass one of the most prestigious exams in the field. Seeing many different career paths open up will make you feel much happier and give a sense of stability to your life. It can help you start a career as a help desk technician, field service technician, or computer technician, or even something as big as owning your own computer repair service. The market's perception of you will also change: you will no longer be seen as an amateur, but as a professional. People with this certification see a salary increase of 5 to 15 percent.
These are just some of the advantages you will enjoy after getting certified; you will only start to appreciate how much you gained from the certificate once you actually complete the exam. You might be wondering why this certificate means so much to companies around the world. It is because your help to these companies might prove to be extremely valuable.

When a company hires an individual who is CompTIA Network+ certified, its customer satisfaction increases, which means the company will grow more easily and see an increase in profit. The CompTIA logo becomes a stamp of quality next to the name of the company.
There are quite a few other reasons why companies are keener to hire certified individuals. A certificate shows that you are a very skilled worker, which means the company will see an increase in productivity and in competence standards, and it will lose less money when training you. Some businesses go as far as hiring only people who hold the certificate, which shows just how important certificates are in the field. One such company is IBM.
To summarize: when you apply for a job, having the CompTIA Network+ certificate does not only mean that you bring a lot of knowledge and expertise to the table. You also bring major advantages for the company, which puts you at a huge advantage compared to the rest of the applicants.
Another thing CompTIA Network+ has over other certificates is that it is designed as a sort of entry-level certificate. It does not go as deep into a single subject as many other certificates do, nor does it expect the kind of incredible in-depth knowledge you can only get from years of experience with a single operating system. It is there to show that you are a solid, professional-level IT technician: nothing more, nothing less. You will rarely need anything more to start off with, and, to be frank, good recommendations and experience will make your résumé look much more impressive than some unknown certificates with strange names.

Why Do I Need It?
Well, to say that you "need" it would be a bit too much; however, it is not too far from the truth. To be honest, even today it is hard to be a recognized pro in the field, and as the market grows and becomes more and more saturated, it promises to become even more difficult in the future.
This certificate works as a form of security. Having it means you are never at the bottom of the barrel; you will always have an advantage over other people. If you really want to be successful, this is a necessary first step. To succeed you need some important work to start with, and this certificate makes sure you have a good starting point. Small businesses that are looking for help will see you as an incredible opportunity, and from that point on it is just a matter of patience and hard work.
In big firms, the certificate is proof of knowledge. It shows any potential employers that you can hold your own and that you know more than the average worker. There are a lot of companies that treat it as one of the requirements for employment; it is of huge importance for companies like Cisco, Canon, and HP. In other companies, the certificate will make a great first impression. Many reports show that 96 percent of managers see certifications as hiring criteria, reaffirming that a CompTIA Network+ certificate is a great way to hold a good place in the corporate food chain. A safe job is something you will rarely find, and a certificate will inch you closer to it.

Is It Too Late To Start?
The great thing about IT is that it is not a field where age is a deterrent. While there is a bit of physical work involved in most jobs, it is usually nothing too straining; the work is mostly about knowledge and adaptability. This is one of the best things about the field: it does not see age. It does not matter if you are 20 or 40; you can always start. It is not like a sport, where age can limit your performance, nor are companies inclined to ignore the applications of older talents.
While it is true that some companies see older age as a negative, it will usually not make too much of a difference. Companies favor younger hires because younger people tend to stay with the company for longer. Younger talent will usually be eager to accumulate work experience, so they will be more inclined to work for lower salaries, and if they are satisfied with their employment, they have more time before retirement. Being younger also means fewer risks of unexpected injuries or illnesses. However, to say youth is an advantage is redundant, no matter which job we are talking about.
While being older might seem problematic, it is something that can be offset through accolades and experience. This is where we can see how important certificates actually are: a company will often be more likely to hire the person with the certificate than the younger one. A certificate is proof of your knowledge and experience; it immediately shows that the work you do while you are there will be of the highest quality. In this regard, the CompTIA Network+ certificate can be used not only to make you stand out, but to even the playing field, too. The certificate itself does not care about age, only knowledge and experience.
That being said, there is no such thing as starting off too late. Again, the IT field is one where you can put in good hours and see a lot of success no matter how old you are, as long as you are willing to put in the work. You probably are, since IT is a work of passion rather than a necessity for most people. If you are truly passionate about doing the job, you will not feel your age.
When it comes to getting certified, the answer is the same: it is never too late to be a student of life. You might have finished college decades ago, and this might feel like a refreshing experience; going through the motions you once did can be nostalgic. The exams do not discriminate by age, either. If you know what you need to know, that is enough. You will be certified, and your journey into the field of IT can begin on a whole new level. This might be a dream of yours, or you might be doing it out of curiosity; you might also see it as something to do later in life, when you are not in a position to accept just any job you might like. Be that as it may, a CompTIA Network+ certificate will give new fire to your career. Your age does not have to be a deterrent for anything. If you feel like you can do it, you probably can.

Chapter 2 Tips On Taking The Exam

We all know that it is not fun to take tests; every one of us can remember the experiences we had in high school and college. Tests tend to be a nightmare, especially important ones, and CompTIA exams are no different: they take quite a bit of time and preparation.
The good part about CompTIA Network+ is that it is not a matter of memorization. It does not expect you to know the answer to every question by heart, and memorizing every question just will not work here. This test requires both recognition and understanding. You will notice questions very similar to those you practiced, yet a tad bit different. This means that knowing what the answer is will hardly help you; knowing why it is the answer will.
You should avoid brain dumps. In this case, a brain dump would be somebody posting the answers to a test they just completed. At best, such posts give you a direction to go in, but do not aim to learn the answers to those questions; instead, try to understand why the answers are what they are. These answers can often be false, too, since they do not go through any quality control. When approaching posts like this, remember that the people making them are doing something illegal: before taking the exam, they signed a non-disclosure agreement.
To reiterate, the best place to get your learning material is the official website. It will give you the most insight into the test objectives, as well as the most extensive source of knowledge. Knowing the test objectives is extremely important. It helps you focus your efforts on certain important subjects while telling you how much more work needs to be done, and it is a good way to gauge how quickly you are going through the material. Studying the objectives can also help you take the test more efficiently: while doing the exam, you will know how many more subjects there are to go through, and knowing the objectives means you are better prepared to pass.
Before starting with all of that, however, you need to figure out what kind of learner you are. Everyone learns at their own pace and in their own way. Maybe flashcards help you out, or studying in a sterile, silent environment; you will likely benefit from studying regularly every day. When you learn what kind of learner you are, you can start working more efficiently, which will make your studying faster and longer-lasting. There are a few quizzes online that can help you figure this out. While it might seem like a small thing, it can make all the difference when you go deeper into your studies, and it is generally useful knowledge that might pop up again eventually.
One way to start your journey off is to set up a network or build a computer. This exam is one of applicable knowledge, which means you will benefit more from trying things out than from cramming a textbook. It is both a great way to prepare and a way to get some experience with actual situations. It might seem daunting at first, but if you are as enthusiastic about being in the IT field as you should be, it will be an exciting opportunity to experiment. Setting up a network is the best

way to understand how one works. It all comes down to actual experience: knowing which port is used for what will come much more easily when you have actually had the chance to use it. This serves another purpose, too. It gives you a lot of knowledge and experience that will be useful once you actually start working. Do not underestimate the value of practice.
CompTIA has many different sets of practice questions, and they come in many varieties based on different subjects. They aim to prepare you not only for this exam but for many other CompTIA exams as well, and they are designed both to give you a general sense of how ready you are and to help you focus on areas you might find problematic. These practice questions are there for you to review and learn from afterward. While reviewing your answers, you can notice patterns in your mistakes and gaps in your knowledge; once you see where you went wrong, you can fix it more easily.
While doing these practice exams, remember that the questions are more of a guideline than anything. If a question seems too niche or too hard, do not be afraid to skip it. The same can be said for the actual exam: do not be afraid to skip questions you do not know, as they can waste more time than they are worth. You can always return to them later, and your knowledge might come to a new light on a second try; you might also remember something you did not before because another question reminded you of it.
When doing the exam, you should be mentally ready for the performance-based questions. These are perhaps the trickiest questions on the exam: they expect you to successfully perform a task inside a simulation. While more complex than multiple-choice questions, they are in no way impossible. They can, however, seem intimidating at a glance, and if you are not careful enough, they might bring your exam to a complete halt. Remember that most of these questions will appear somewhere near the start of the test. Also remember that they are doable even if they might not seem so, and that they can be prepared for, for instance with online labs. If you get stuck on one of them during the exam, do not waste time waiting; take a breath and come back to it later.
In tests like these, you will often find the answer to a question inside the question itself. Pay attention to this: it can save you a lot of time, and it can reset your morale by picking your tempo up.
Another thing you can do to increase the rate at which you learn is to join an online community of people like you. There are quite a few CompTIA study groups online. They are a great option for the more social learners who are used to classrooms, and in these groups you can often find great resources and suggestions, as well as the morale boost you might need. You can find a lot of groups like this on Reddit; there are quite a few subreddits dedicated to people who have passed their exams and people preparing to do so. There you can get a lot of help, from general tips to specific questions you might find difficult. Remember that people who passed the exam are tied to a non-disclosure agreement, so they are not able to give you all the answers.
You can look at the CompTIA exam like a marathon. A marathon isn't something that you start

preparing for a week in advance; you would see that it takes a bit more time than that. You need to make sure that you have allocated sufficient time for studying and that you use it correctly. Do not be shy about postponing the exam if you need to, and do not be afraid to put more time into areas and subjects you are weak with. Once you plan everything out, you just need to start studying hard.
If you do not see success studying on your own, however, remember that you can always hire professional help. This is often done by people who have already failed the test in the past. The CompTIA exams do not come for free, so if you do not want to waste your money, you want to ace them in as few tries as possible. While self-study is important and is the cheapest of your options, do not be afraid to ask for some help. That is why you can find courses led by professionals on the internet. They will help you ace the material and get your certificate, and they give you valuable experience that will be a huge help when you finally sail the waters of IT.
These professional-led courses come in many shapes and sizes. You can find classes that fit your schedule time-wise, and if you live far away from the location where they are held, you can take them over the internet. These courses not only help you get CompTIA certified, they also assist you with getting hired: they will help you optimize your resume and even link you with their business partners.

Things You Can Do During The Exam
While doing the exam, internalize the answers. Multiple-choice questions can often be a little piece of hell: they are a great way to start doubting the things you know. There are a few tactics you can use to make them easier. The first thing to do when you aren't sure of the answer is to cut out every answer you are sure isn't correct. After that, focus on recalling as much information about the question as possible; by not fixating on the question itself and thinking about the whole subject, you can arrive at the answer after a while. When all of the answers seem wrong, it means you read the question poorly.
It is very important to treat yourself well on the day of the exam. Being in good physical condition will leave you feeling more confident and performing better. Make sure that you are well-rested, fed, and hydrated; treat your body well and it will return the favor. Having a lot of protein in the morning will have you bursting with energy.
Make sure to get to know the location where the exam will take place. This is very important, because logistical errors can cost you the exam. If the exam requires you to travel to a new location, make sure to have your route planned out, and get to know the traffic along the way so you can account for arriving late or early. Alternatively, if you do not own a vehicle, make sure to research which travel options you have and which are best.
Being confident in yourself is also very important for your success. Build up your confidence in whatever way you can: give yourself a pep talk, remind yourself how well you have done so far, and use that as fuel to ace the test.

There are a lot of things you can do to optimize your exam-taking experience. The plethora of subjects covered and the tricky questions can often leave your head spinning. It is easy to get confused while doing the exam, and it is not at all rare for even the best-prepared people to fail. Why? They did not have the right mentality for the test.
If you want to pass the exam, studying and gaining knowledge is very important, as you might have known. However, knowledge alone might not be enough sometimes. This is often the case with first-timers: more experienced exam-takers can easily identify the patterns in which the questions are asked, which is the key to efficiency while taking the exam.
We have mentioned PBQs (performance-based questions) before. They can be quite tricky for most takers, especially those who have not dedicated a lot of time to them. CompTIA, however, believes that these questions show the abilities of the exam-taker best. PBQs require problem-solving capabilities: unlike multiple-choice questions, you do not have the luxury of memorizing the answer; you need to find it. When approaching a PBQ, do not just rush it. Take your time to get to know the problem presented in the question, calmly analyze it, and then start working.
We have also mentioned that the answer to a question can often hide within the question itself. Make sure to pay attention to any capitalized or bold words, as they are there for a reason. If you are having trouble with a question, take a step back and formulate a plan that revolves around key phrases. Sometimes the answer will lie in a previous question, so make sure to learn something from each question, and always remember to backtrack, as it can give you those additional few points you might need.
It is important to remember that this exam does not have negative points. This means that if you do not know the answer to a question, you can just take your best guess: you have nothing to lose and the potential to gain quite a bit.

Chapter 3 Further Into Topologies

One of the most important subjects covered by the CompTIA Network+ is network topologies. A network topology is the formation in which multiple computers or systems are connected. We have discussed topologies in the previous books; however, there is quite a bit more to cover when we go in-depth.
Today, even our daily lives rely on wireless networking methods; it is no longer simply something used by large organizations. You can hop into your car and listen to music on the radio, then get home and browse social media on the internet. With that being said, wireless networks don't just pop out of thin air; they still rely on a set of physical media. This backbone is found within the majority of today's installed LANs, usually via cabling or similar methods. In this chapter, we'll be dealing with all of these, just more in-depth than in the last part of this series.

Coaxial Cable
A coaxial cable, or coax, is made up of a central conductor (usually copper) surrounded by plastic insulation, with a braided shield around it and an outer jacket of plastic, usually PVC or FEP (also known as Teflon). A Teflon-type jacket is usually called a plenum-rated coating. While plenum-rated cables are quite expensive, many locales make their use mandatory, because they make the cabling safer. The plenum rating is applied to pretty much all kinds of cabling, and it is a good replacement for most other kinds of cable sheathing or insulation.
The main difference between plenum and regular cable comes down to the construction, as well as where each is appropriate to use. A lot of large buildings are made so that the air within circulates from one floor to the other; the space between these floors is often called the plenum. It is where cables are usually run, as routing a ton of computers with all of their cables through the occupied space would be an absolute nightmare.
The biggest issue with running ordinary cables in the plenum is fire: standard insulation is quite poisonous when its smoke is inhaled, and the circulating air can carry that smoke through the whole building, affecting everyone inside. To prevent this, most buildings enforce strict fire protocols. Besides that, ordinary cables in the plenum make it easier for fire to spread from floor to floor. Because of this, the NFPA (National Fire Protection Association) has decreed that cables within the plenum have to be tested to be safe: it must be ensured that they are fire retardant and that they create the minimum smoke possible. Using a non-plenum cable within the plenum is therefore essentially illegal. Now, that isn't to say that non-plenum cables don't have their uses; there are a lot of places where they're perfectly safe.

They are also a lot cheaper. For example, Thinnet (10Base-2) or Thin Ethernet cables are thin coaxial cables. They serve the same essential purpose as regular coaxial cable while taking up only about a quarter of the space. These cables are Radio Grade 58 (RG-58). If you decide to use them, you'll have to use BNC connectors to attach the different stations to the broader network. Today, these cables aren't all that popular; our most-used coax today is the 75-ohm cable that feeds our televisions. Using coax for Ethernet is pretty outdated; however, you'll still find some companies that favor it.

Twisted-Pair Cables
Twisted-pair cables are made up of multiple insulated wires that are connected in pairs. On occasion, a metal shield will be put around them; this is where the name shielded twisted-pair (STP) comes from. The unshielded counterparts are UTP.
Ethernet cable designations follow a simple format, NBase-X, wherein N is the signaling rate (expressed in Mb/s) and X is a unique identifier. For example, 100Base-X denotes a 100Mb transmission speed, with the X standing in for several variants; 100Base-X is the golden standard when it comes to running 100-megabit Ethernet through two pairs.
The reason these cables are twisted is that when electric signals are sent through two wires lying next to each other, as is the case within a cable, they can cause interference referred to as "crosstalk." The way we mitigate this is by twisting the two wires around each other as if they were one.
UTP is the most common type of cable you'll find today, for the following reasons:
It is quite cheap, which lends itself to widespread business use.
It is quite simple to work with, having an easy-to-understand architecture.
It still has a fast transmission rate.
Any given UTP cable will belong to one of the following categories, usually abbreviated as Cat. These days, you'll be hard pressed to find any cables older than Cat 5e in use.
Cat 1 (two twisted wire pairs): This is the most aged type of UTP and is scarcely used today. It is not used to transmit lots of data and is associated with plain old telephone service, or POTS. While this kind of UTP was standard before 1983, today it is rarely used, and where it still exists it is swiftly getting replaced.
Cat 2 (four twisted wire pairs): Coming to a total of 8 wires, this cabling can handle 4Mbps on a frequency of up to 10MHz. Much like Cat 1, this is another kind of cabling you'll be hard pressed to find a use for in the modern world.
Cat 3 (four twisted wire pairs, 3 twists per ft): While very similar to Cat 2, the added twists let these cables handle higher frequencies, up to 16MHz. This is yet another obsolete kind of cable.
Cat 4 (four twisted wire pairs, improved): These use frequencies of up to 20MHz.

Cat 5 (like Cat 4 but improved): These are much like Cat 4 cables, but can handle up to 100MHz.
Cat 5e: This represents Cat 5 cabling, except enhanced. While it is also rated for 100MHz, it is much better at handling disturbance, which is crucial for the era of Gigabit Ethernet. Today, you'll seldom find any kind of cabling under Cat 5 getting used anywhere.
Cat 6 (four twisted wire pairs): Cat 6 became standard practice in June 2002, and while you would usually use it as a riser cable to bring two floors together, newer buildings are often exclusively using Cat 6 and fiber cables.
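For quick reference, the categories above can be collapsed into a small lookup table. This Python sketch restates the figures from the list; the one rating not stated in the text (Cat 6's commonly published 250 MHz bandwidth) is added as an assumption.

# Quick-reference summary of the UTP categories described above.
UTP_CATEGORIES = {
    "Cat 1": {"max_mhz": None, "notes": "POTS voice; pre-1983, obsolete"},
    "Cat 2": {"max_mhz": 10,   "notes": "4 Mbps data; obsolete"},
    "Cat 3": {"max_mhz": 16,   "notes": "3 twists per foot; obsolete"},
    "Cat 4": {"max_mhz": 20,   "notes": "improved four-pair cabling"},
    "Cat 5": {"max_mhz": 100,  "notes": "100 Mbps Fast Ethernet"},
    "Cat 5e": {"max_mhz": 100, "notes": "less crosstalk; Gigabit-ready"},
    "Cat 6": {"max_mhz": 250,  "notes": "standard since June 2002"},  # assumed rating
}

for name, spec in UTP_CATEGORIES.items():
    rating = f"up to {spec['max_mhz']} MHz" if spec["max_mhz"] else "voice-grade"
    print(f"{name}: {rating} ({spec['notes']})")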

UTP Cable
Connecting UTP cables can be a bit of a hassle. BNC connectors aren't exactly suitable for them. Because of this, you'll have to use an RJ, or Registered Jack, connector; this is the same kind of connector that most telephones are connected with, and RJ-45 is the one to use for network cabling.
If you're looking to achieve data rates above 10Mbps through a UTP cable, you should ensure that all of the components are rated to deliver that kind of output, and you need to be very careful when handling the components. Try to be careful with the cables themselves: if you end up yanking on your Cat 5e cable, that will lessen the number of twists within the jacket, which will, in turn, render the 5e label on the cable invalid. You need to make sure that you're doing every step of the way properly.
With that being said, Gigabit Ethernet over UTP today uses all four wire pairs (eight wires), and achieving it requires all of your cabling to be in excellent condition. Always ensure that you're using rated components from end to end, every step of the way; if you use improper components, or simply ones that don't match the Cat 5e specifications, you won't have a Cat 5e certified installation.

Fiber-Optic Cables

Fiber-optic cables transmit digital signals through light rather than electricity. They're the new kid on the block when it comes to cabling; however, their advantages are plentiful and rather obvious, so they're slowly becoming more and more popular. Because optic cables use light itself rather than electric impulses to transmit data, they are immune to RFI and EMI, making them quite a bit easier to use. This is especially true for rougher installations, such as elevator shafts and the like.

Fiber-optic cables are usually built around a glass or plastic core that carries the light impulses. The core is clad with glass or plastic; the key is for the cladding to have a different refraction index than the core. All of this is then surrounded with a strong buffer, usually made of Kevlar, and the whole construction is finally jacketed with PVC or plenum-rated material.

For these cables, there are two modes: SMF (single-mode fiber) and MMF (multi-mode fiber). The two are differentiated by the number of light rays they carry. Multimode fibers are generally most useful for short-distance communication, while single-mode fibers are often used for long-distance data transfer. With that being said, they also have pros and cons, much like every other cable type.

Pros:

● Immune to EMI and RFI, making them an ideal choice where problems of that sort are commonplace.
● Very friendly toward long distances; they can easily transmit data over 40km.

Cons:

● Fiber-optic cables are quite difficult to install compared to more common types.
● Due to having more materials and a more complex construction, they cost more than most alternatives.
● If a problem does occur, it'll be harder to troubleshoot, and even the troubleshooting equipment is more expensive.

SMF cables are very quick and are ideal for long-distance media transfer; they consist of one or two strands of fiberglass. The LEDs and lasers in the transmitting equipment are the two other crucial elements for end-to-end communication. This type of cable is used for extra-long distances due to its ability to transmit data over 50 times farther than multimode fiber can (and regular cabling doesn't even compare to that). Now, because a crucial element in these cables is glass, they can be rather difficult to install. While there are layers protecting the core itself, the cable can still go awry if it's pinched too hard or bent around a corner that's too tight.

When it comes to connectors, there are quite a few types that you can use to connect fiber-optic cables. The two that have garnered the most support are the ST (straight tip) and SC (subscriber connector). ST connectors are among the most used fiber-optic connectors. They use a BNC-style attachment mechanism to make the experience of connecting and disconnecting as free of frustrations as possible. This is their main selling point, and if you're a lover of convenience, they'd be ideal for you. The SC connector is a different kind of connector: it latches with a mechanism that grabs onto the connector and prevents it from falling off. These work for either single-mode or multi-mode optical fibers and are usually rated for around 1,000 mating cycles. While they are seeing more and more use due to their increased security, they still aren't quite at the popularity of ST connectors.

Optic or Fiber?

Deciding between fiber-optic and copper cables is a very important decision to make. Thankfully, there are quite a few pointers you can turn to when deciding which one it'll be.

The first criterion is the length of your data runs. If you're measuring them in miles, then using fiber-optic cable is a no-brainer. This is simply because with copper you get at most 1,500ft before you have to regenerate the signal, and if you're using UTP, you'll be down to just 328ft. Fiber is also ideal if you need a lot of security, because fiber doesn't produce a magnetic field that can be measured. And although fiber optics used to be extremely expensive, today they have gotten a lot cheaper.

On the other hand, if you don't need long distances to be covered, then copper is your best bet, as there's no reason to spend the extra money. Naturally, if you don't need superior security, there's also no need to shell out for the more expensive fiber technology. Finally, if you need an installation in a difficult-to-reach area that has a lot of clutter, twists, and turns, then copper is your best bet again.

The simplest kind of topology is a point-to-point topology. The reason it is the simplest is the fact that it represents a direct connection between exactly two routers, which gives you one communication path for the transfer of data. The two routers in this kind of topology can be linked in several ways. It can be done via a serial cable, which makes the topology a physical network; on the other hand, the connection can be established via a circuit inside a frame relay network, which is known as a logical network. Different symbols are used for different kinds of elements inside a connection in the industry, and it would be a good idea to get to know them and learn how to actually use them. In topologies like these there are a few tiny nuances that you should always be aware of.

Point-to-point networks are extremely common. They are used in most WANs that you can see across the world today. Something as basic as a link between a hub/switch and a computer is a kind of point-to-point connection. A very common version of a point-to-point network is made of a wireless link between two wireless bridges, used to connect two computers that are a relatively short distance away from one another.

A point-to-multipoint topology is a more complex network topology. As the name might suggest, it consists of several destination routers and an interface on one separate router. The separate router is connected to the others and interacts with them without them interacting with one another. To put it more simply, imagine an office building: the corporate office would be the main router where the interface is located, while the branch offices would be the "multipoint" part of the topology. Every single one of the routers and their interfaces connected in the topology are parts of one network. A few other examples of a point-to-multipoint network are a college or corporate campus.

A hybrid topology is a more complex matter to talk about. The name is pretty self-explanatory: it is a topology composed of two or more different kinds of logical or physical topologies inside one network.
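Before moving on to topology selection, note that the fiber-versus-copper criteria above reduce to a rough rule of thumb. Here is a minimal sketch using the distance figures quoted in this section; the thresholds are the section's rules of thumb, not formal limits:

def choose_medium(run_ft, needs_security=False, tight_routing=False):
    """Rough fiber-vs-copper pick based on the rules of thumb above."""
    if needs_security:
        return "fiber"       # no measurable magnetic field to tap
    if run_ft > 1500:        # copper needs regeneration past ~1,500 ft
        return "fiber"
    if tight_routing:
        return "copper"      # fiber dislikes pinches and tight corners
    return "copper"          # cheaper when distance and security allow it

print(choose_medium(5280))                      # a mile-long run -> fiber
print(choose_medium(200, tight_routing=True))   # cluttered closet -> copper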

The Importance of Topology Selection

Knowing what kinds of topologies exist is only half the battle. When making a network, you should know what kind of topology suits it best; just picking one and going with it won't cut it. There are several things that you need to consider.

In the previous examples, the specific topology was chosen for a reason. Inside an office environment a point-to-point network just wouldn't work. It would be very inefficient, as a single office cell would require a network of its own. This would be very costly and inefficient, and it would hurt the office's budget greatly. Thus a point-to-multipoint connection was chosen. Similarly, if you are looking to set up a home network, a hybrid topology would be unnecessarily complex. While, yes, the network would do its job, there would be too much room for error, and it would take too much time and money to set up. A point-to-point topology would do the job much better for much less cost.

It's not only a matter of cost efficiency. Sometimes a more complex network than the one in mind is necessary. This means that before setting up a network, proper knowledge of the benefits and downsides of each topology will do you good. With an incredibly vast array of different topologies, it is pretty difficult to stay informed. When push comes to shove, it comes down to asking the right questions. First of all, you need to know how much money you are willing to spend. Next, you need to decide how much fault tolerance you need. On top of that, you need to recognize that every network grows quickly. This means that network maintenance is a huge part of owning a network, so another thing you need to keep in mind is how easy the network is to configure and how scalable it is.

For example, if you want to design a nice, cheap network that requires only a few computers inside a single room, you will probably want to go about it by getting a wireless access point and a few wireless network cards. This means that you save a lot of money on cabling and a lot of effort on the setup. On the other hand, if you are trying your hand at making a solid design for a large company that keeps growing, you are going to opt for some variation of a star topology. Why? Because a quickly growing business will want to be able to edit its network as often as possible, and a star topology allows this to be done very quickly. This is where star topologies shine the most: they are the easiest kind of topology to edit and make bigger, and every change that you make will probably be cheap and quick to execute.

To cut to the chase, here are the parameters that you need to watch out for when defining your own network:

● Ease of installation
● Cost
● Maintenance
● Fault-tolerance

What is the Network Backbone?

As you might already know, any network that you may encounter in computing today can be very complicated. This is why we need a way of communicating intelligibly with one another while managing to pinpoint which part of the network we are talking about, and it is why today's networks are divided into different parts called segments and backbones.

If you need to visualize the backbone of a network, look no further than your own. The backbone of the network is the part that is connected to all the other segments of the network. It is what all the servers are connected to and what keeps the network whole. As you might have imagined, considering how important it is to the network, it must rely on some extremely fast and robust technology. You are correct: more often than not it relies on Gigabit Ethernet. If you want to optimize the performance of your network, you must connect every part of your network to the backbone.

A segment, on the other hand, is a term used for any small section of the network that isn't part of the backbone but can be connected to it. Every workstation in a network is connected to the servers, which are connected to the backbone.

Hardware in Topologies

People often disregard the quality of the hardware that will host your network. They often say that it is just a set of cables that holds it together, and they usually lean toward cutting corners here, expecting not to lose anything. This, however, is the wrong way to approach it. Whenever you are building a network, you should build it from the bottom up.

Let's say that you are setting up a network back home and you have spent quite a pretty penny already. Every device you placed there is state-of-the-art and well maintained. Now, why would you ruin something like that by using some horrible-quality cables? It's not just about the look and the feel, either. If you cut down on your budget for hardware, you will often see a drastic decline in quality over time. More often than not, the difference will be hugely noticeable.

When providing physical media for your network, you should aim to get the best that you can afford. If it malfunctions due to you cutting corners, you are in big trouble, especially when we are talking about the backbone of your network. If your hardware does malfunction, it will definitely mean losing loads of time and money. You could lose a lot of precious data and even be forced to recover it yourself. Paying a tad bit extra for cabling can save you some money in the long run. Network downtime can spell disaster for bigger companies.

Topologies in CompTIA Network+

The CompTIA Network+ exams place a large emphasis on your knowledge of networks. That is why you need to be very well acquainted with the key terminology of the subject. This chapter teaches you what a network should look like and what the components of the network should be. Another crucial piece of information that you should take in is that components are not the only thing required to build a network. You also need to choose the proper connection method that will optimize your network for the job you want to give it.

Recognizing topologies is a very important skill in any IT work. With the many different kinds of topologies come many features and drawbacks. It is your job to know them all, and the knowledge can not only help you with your exam, but will often lend itself to you when you actually start doing IT on a professional level.

In short, here you have learned the following, which is essential for acing the test:

● Knowledge of network topologies. It is not rare for a question to require you to know their names and descriptions.
● Remember that physical and logical networks are very different in nature.
● The advantages and disadvantages of each topology over one another. If you know what each topology brings to the table, you will have an easier time with troubleshooting and problem solving.

Chapter 4 Ethernet Specifications

Ethernet is something that pops up in real life almost as much as it does in IT. You can often hear it being mentioned here and there, but what does it actually mean? Well, to put it simply, Ethernet is a connection method that lets every single host on a network share the bandwidth of the same link. That might not sound very special until you examine why it is so popular. Its popularity comes from the fact that it scales extremely well, which means it has a very easy time adopting new technologies like Gigabit Ethernet and Fast Ethernet. Another thing it is recognized for is how easy a method it is to implement in the first place. Not only that, but it is very easy to troubleshoot. It operates on both Physical and Data Link layer specifications. We will expand more on Ethernet and its specifics within this chapter.

Elements of Ethernet

A collision domain is a term that pops up quite often when talking about Ethernet. A collision domain is a specific network scenario in which one device sends out a packet on a network segment and forces every other device on that segment to pay attention to it. This might seem like no big deal; however, it can be quite a problem, because devices that transmit at the same time are forced to retransmit later. This is called a collision event: any event during which two devices interfere with one another via digital signal. You want to avoid collisions because they can negatively impact the performance of your network. You will often find this scenario in hub environments, where each host segment connects to a hub, and every hub represents one collision domain and one broadcast domain.

A broadcast domain is a term used for a group of devices composed of every device on a single segment that can hear any broadcast sent on that segment. Broadcast domains are often limited by physical media, usually via switches and repeaters. However, the term can also be used to reference a part of a logical network segment; in this case it would be a segment in which every host can reach every other host through the Data Link layer.

Another term that you might often encounter in Ethernet networking is Carrier Sense Multiple Access with Collision Detection. You would think that with such a hefty name there would be little room for nuance, but you would be wrong. CSMA/CD is a protocol that aims to avoid collisions while different hosts transmit packets. It helps every functioning device on the Ethernet share the same bandwidth without interrupting one another. The importance of proper collision management cannot be overstated. In a CSMA/CD network, every host on the network can examine the transmission that another host is sending. Bridges and routers are, therefore, very important for the network, as they are the only way to keep a transmission from leaking through the whole network.

So, how does the protocol work? The protocol is activated whenever a host wants to use the network for a transmission. The first thing that the protocol does is check for any digital signal on the wire. If the check is clear, or, to be more precise, if no host is transmitting, the host is allowed to proceed with the transmission. This is not where the protocol ends, though. The host that is performing the transmission constantly checks the wire to make sure that no other transmission is happening at the same time. If the host finds that another signal is being transmitted, it sends out a jam signal. This causes every other host within the segment to stop sending data, pause whatever they are doing, and attempt to transmit again later. A set of back-off algorithms is used to determine when each transmission can continue. If collisions keep occurring after 15 different tries, the hosts involved in the collisions are prevented from transmitting.

The process of collision solving goes as follows:

● A jam signal is transmitted. This lets the hosts know that a collision occurred.
● A random back-off algorithm is triggered. Every device within the segment is interrupted and their transmissions are stopped for a while.
● Every host has the same priority once its timer expires.

Heavy collisions on a CSMA/CD network result in the following:

● Delay
● Lessened throughput
● Congestion
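The retry behavior described above can be sketched in a few lines of Python. The back-off window shown here is the classic Ethernet binary exponential back-off (wait a random number of slot times from 0 to 2^n - 1, with the window capped after 10 doublings, giving up after the 15 tries mentioned above); the collision probability is a made-up stand-in so the loop has something to react to:

import random

MAX_ATTEMPTS = 15  # per this section: hosts stop retrying after 15 tries

def transmit(collision_probability=0.3):
    """Sketch of CSMA/CD retransmission with binary exponential back-off."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if random.random() > collision_probability:
            return f"frame sent on attempt {attempt}"
        # Collision: the jam signal goes out, then this host waits a random
        # back-off of 0 .. 2^n - 1 slot times (window capped at 10 doublings).
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        print(f"collision on attempt {attempt}, backing off {slots} slot times")
    return "giving up: too many collisions"

print(transmit())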

Half- and Full-Duplex Ethernet

The most essential difference between full and half duplex is that half duplex is unable to transmit and receive at the same time. In essence, half duplex is much like full duplex, with the only practical difference being that it cannot receive and transmit at once, though it is perfectly able to do both at different times. Running half duplex means that you are using only one wire pair with a single signal that is either transmitting or receiving. This works because when the host receives a digital signal, it sets CSMA/CD to act; this is done to ensure that no collisions occur, and to ascertain that the signal is transmitted once again if there is a collision. Half-duplex Ethernet is generally no more than 40% efficient because of the intrinsic limitations of 10Base-T networks, which typically provide 4Mbps or less. Sure, 100Mbps Ethernet can run half duplex; however, this is an extremely rare occurrence.

On the opposing end, full-duplex Ethernet uses both pairs of wires at once. The advantage is obvious when we compare the hardware differences between the two. On top of this, full duplex uses a single point-to-point connection between the sending device's transmitter and the receiver of the receiving device. This has two benefits: you will attain faster transfer speeds, in addition to having superior collision prevention. This makes full duplex the superior choice in situations where this is relevant.

You don't really need to consider collisions when you're thinking about full duplex. Think about a freeway, and imagine it as your network. A freeway has many lanes; every transmission takes its own lane, and nobody crosses paths with anyone else. This is much safer from collision than half duplex, which would be the equivalent of a one-lane road. This difference, although minor at first glance, can become massive.

The point of full duplex is to give you the maximum efficiency in either direction. It allows you to use 10Mbps Ethernet to get 20Mbps, and if we consider Fast Ethernet, those speeds skyrocket up to 200Mbps. While this is what generally occurs, it is important to remember that the world of networking offers no guarantees.

A full-duplex Ethernet connection can be used in many different situations. You can use it to connect a host to a switch, a switch to a switch, or even a host to another host using a crossover cable. It is only limited by the fact that it cannot be run on hubs. Now, seeing all these advantages, it can be easy to make the mistake of considering full duplex to be absolutely superior in all situations. Unfortunately, nothing is ever as simple as it seems.

Whenever you turn on a full-duplex Ethernet port, it starts out by connecting to the remote end. Afterwards, it begins negotiating with the opposing end of the Fast Ethernet link. This mechanism, most often referred to as auto-detect, is put into place so that the two ends can see what kind of exchange capabilities can be allocated; to simplify further, it shows the speeds you can run on. After this is done, it checks whether you can run full duplex, and if not, it automatically falls back to half duplex.

Once you power on a host, things go right out of your hands: hosts auto-detect what duplex type and speed they can run on. This can, however, be changed on your network interface card. These days, people rarely go into a host's NIC configuration to change these settings, and it is something that you might want to avoid unless you know exactly what you are doing.

Always remember that half-duplex Ethernet provides a lower throughput than full duplex. The difference is quite large, to be honest. This and the fact that it shares a collision domain are pretty big drawbacks. If you are trying to run full-duplex Ethernet, always remember the following:

● Full duplex has no collisions.
● Each full-duplex host needs a dedicated switch port.
● The network card and port need to be compatible with full duplex to run.
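As a quick sanity check on the numbers above, the aggregate-throughput claim is simple arithmetic: full duplex lets the link send and receive at once, so the theoretical ceiling doubles. A trivial illustration (ceilings only; as noted, real networks come with no guarantees):

def aggregate_mbps(link_mbps, full_duplex):
    """Theoretical ceiling: full duplex doubles by sending and receiving at once."""
    return link_mbps * 2 if full_duplex else link_mbps

print(aggregate_mbps(10, True))    # 10 Mbps Ethernet -> 20 Mbps aggregate
print(aggregate_mbps(100, True))   # Fast Ethernet -> 200 Mbps aggregate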

The Data Link Layer

The Data Link layer is where Ethernet handles addressing, often referred to as MAC addressing or hardware addressing. This is also where Ethernet frames the packets it receives from the Network layer and gets them into a transmittable shape.

Every MAC address is a hexadecimal address burned into each Ethernet NIC. The MAC address is a 48-bit address composed of an OUI (organizationally unique identifier), which includes the Global/Local bit and the Individual/Group bit, plus 24 vendor-assigned bits. The Institute of Electrical and Electronics Engineers is in charge of assigning the OUI to each organization. The OUI is made up of 24 bits. The organization then, in turn, selects a 24-bit address that is usually unique to every adapter it makes. The address has a very small chance of being the same as that of another organization's, but it can happen.

Two bits within the OUI deserve special attention: the I/G bit and the G/L bit. When the I/G bit has a value of 0, it is safe to assume that the address is used by a single device and will appear in the source portion of the header. When the value is 1, it is safe to assume that the address is being used as either a multicast or broadcast address. The other bit is the G/L bit, also known as the U/L bit (U meaning universal). When the value of that bit is 0, a globally administered address is in use; when the value of the bit is 1, the address is locally governed and administered.

The low-order 24 bits of the address are a code assigned either locally or by the manufacturer. The vendor portion usually starts at 24 zeros and keeps ticking up until it reaches 24 ones, though that will only happen on the 16,777,216th card. Many manufacturers make it a habit of having the same six digits which they use as the last six on their serial numbers.

To roll it back to the Data Link layer, it has another very important responsibility: it is in charge of turning bits into bytes and bytes into frames. Frames are used to encapsulate packets that come down from the Network layer, so that they can be transmitted over any kind of physical media access.
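The bit layout just described is easy to pull apart in code. Here is a minimal Python sketch (the sample address is hypothetical, chosen purely for illustration):

def parse_mac(mac):
    """Split a MAC address into its OUI, vendor portion, and I/G and G/L bits."""
    octets = [int(part, 16) for part in mac.split(":")]
    first = octets[0]
    return {
        "oui": ":".join(f"{o:02x}" for o in octets[:3]),          # IEEE-assigned 24 bits
        "vendor_assigned": ":".join(f"{o:02x}" for o in octets[3:]),
        "ig_bit": first & 0b01,         # 0 = individual address, 1 = group (multicast/broadcast)
        "gl_bit": (first & 0b10) >> 1,  # 0 = globally administered, 1 = locally administered
    }

print(parse_mac("00:1a:2b:3c:4d:5e"))  # hypothetical address: both bits are 0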

Ethernet Frames

Ethernet stations transfer data frames from one to another using the MAC frame format, a group of bits that have this one specific task as a purpose. The frame carries a cyclic redundancy check (CRC), which is a form of error detection. Pay close attention to that sentence: it offers no correction of the errors; rather, it just points them out.
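Since the CRC detects but does not correct errors, a two-line experiment makes the point. This sketch uses Python's built-in CRC-32; Ethernet's frame check sequence is likewise a 32-bit CRC, though the 802.3 framing details differ from this library call:

import zlib

frame_payload = b"some frame data"
fcs = zlib.crc32(frame_payload)  # checksum the sender would append

corrupted = b"some frame dataX"  # a single added byte in transit
print(zlib.crc32(corrupted) == fcs)  # False: the error is detected, not fixed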

The Physical Layer

The first Ethernet LAN specification was made by a group called DIX, which stands for Digital, Intel, and Xerox; they were, in fact, the first group ever to implement Ethernet. The IEEE used that LAN specification to create the 802.3 Committee. The network made by DIX was a 10Mbps network running on coax; it later moved to twisted-pair and ended up on fiber physical media.

The 802.3 Committee was extended into two new committees. These were dubbed 802.3u, which governed Fast Ethernet, and 802.3ab, which governed Gigabit Ethernet over Category 5+ cabling. The last committee that came as an extension of 802.3 is 802.3ae, which governed 10Gbps over coax and fiber.

This, if nothing else, shows how important it is to know the different kinds of Ethernet that exist. While it might seem like an amazing plan to run Gigabit Ethernet to every desktop and 10Gbps between every switch, such a network would cost way too much. While this is something that we are steadily moving toward, today it is just a cost that can hardly be justified by any company in the world. What companies do instead is mix and match the different methods to come up with a cost-effective solution that suits their needs.

The EIA (Electronic Industries Association) and the newer TIA (Telecommunications Industry Alliance) are the standards bodies that govern the creation of the Physical layer specifications for Ethernet. They specify that Ethernet should use an RJ (Registered Jack) connector with a 4 5 wiring sequence on UTP (unshielded twisted-pair) cabling. By the standards of the industry, this is just called an 8-pin modular connector.

Every Ethernet cable that the EIA/TIA specified has a property called inherent attenuation. This represents the loss of signal strength over the distance traveled through the cable, and it is measured in dB. By this standard, the cabling that you can find on the market is separated into different categories: the lower the attenuation of a cable, the higher the quality, and thus the higher the category. For example, a Category 4 cable is better than a Category 2 cable because it possesses more wire twists in each foot of the cable, which means there is less crosstalk and the signal is cleaner.
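Because attenuation is quoted in decibels, it is worth internalizing that the scale is logarithmic: every 3 dB of loss roughly halves the signal power. A quick sketch of the standard decibel formula (general engineering practice, not something specific to the EIA/TIA documents):

import math

def attenuation_db(power_in_mw, power_out_mw):
    """Signal loss in decibels: 10 * log10(Pin / Pout)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

print(attenuation_db(1.0, 0.5))   # ~3.01 dB: half the power is gone
print(attenuation_db(1.0, 0.25))  # ~6.02 dB: three-quarters gone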

Ethernet in the CompTIA Network+ Exam

So what are the fundamentals that should be picked up from this chapter? Which parts of it will you be able to use during the exam or during your career?

First of all, you learned the fundamentals of Ethernet. These are fundamentals for a reason: while they are minute details that will rarely pop up, your understanding of networking needs to be built from the ground up. This leaves much less room for gaps in your knowledge and makes sure that if you get the chance, you can apply the fundamentals. We also touched on how hosts communicate on the Ethernet and how important uninterrupted communication is. If you are building a smaller system that is not of great importance, this is something that you do not have to worry about too much. However, for large corporations that rely on Ethernet, one error can be a source of great loss.

We discussed at great length the differences between half-duplex and full-duplex Ethernet and how important it is to know the difference. This can pop up in a PBQ question and even some of your multiple-choice ones. On top of that, it is useful, applicable knowledge that you might want somewhere down the line. It might pop up as a part of your work or even as a hobby. It is knowledge that will not go to waste.

We discussed hexadecimal addressing as well, which is quite a niche subject, but a potential subject that can be covered by the CompTIA Network+. This is also a great way to freshen up your knowledge of numerical systems. It is something that rarely pops up in the field, but you never know when you are going to need what you learned in high school.

We finished by talking about the importance of proper cabling and the different kinds of cables. This is perhaps the most important subject we covered in regards to the actual exam. It would be wise for you to pay extremely close attention to this subject, as it is often a part of PBQ questions. Even if you see little of it on your exam, it is good knowledge that you will be able to apply very often in the field.

Properties of Cables

There is a reason why we use so many different kinds of cables in networks: every kind of cable has its own properties that are unique to it. To make the best use of an area in your system, or to equip it best for its purpose, you need to know what kinds of properties each cable has. Different kinds of cables can have a different duplex, distance, speed, frequency, and noise immunity. Let's expand more on that, shall we?

Transmission Speeds

Network administrators can control the speed of the network to meet its traffic demands. How do they do this? They do it based on the type of fiber or cable that their network is running on. Usually, admins aim to have speeds of 10Gbps or even higher within the most important areas of the network. Typically, speeds are much lower in the distribution and access areas. It would be fairly redundant to talk about why having high speeds is important.

Distance

Often, before deciding what kind of cable you want to use, you want to take a look at the distance between the components of a network and the general topology. Some technologies run much farther than the norm and do so without any communication errors. However, all networking technologies are prone to attenuation. What is attenuation? Attenuation is a term used for the degradation of signal, usually due to the medium used as well as the distance that the signal needs to travel. Attenuation is a big part of selecting cables, as some have very specific limitations. For example, the maximum length of a segment using twisted-pair cables should be no longer than 100 meters.

Duplex

As you may already know, all communication is either full-duplex or half-duplex. We cover this in more depth elsewhere in the book, but what's important for you to know right now is that the difference mostly comes down to whether your network can listen and talk at the same time. Half-duplex communication is communication where a device can either send or receive at any given moment. An example of this would be a walkie-talkie: whenever you press the button, you are allowed to speak, and the speaker on your device is turned off. Full-duplex does not suffer from said limitation. In full-duplex, both devices can receive and send a communication at the same time, which makes it much more efficient. Full-duplex has become a sort of golden standard in communication technologies.

Magnetic current is created whenever electrons are pushed through two wires that are situated next to one another. This is very important, since it is what allows us to use computers. However, the downside is that this process creates two communication issues.

The first issue is the fact that the wire can be tapped. This is a common expression that you might often hear in movies; however, it is a real security issue to this day. Because the wire sends the communication as a stream of 1s and 0s, the message can be intercepted, read, and deciphered without the wire being cut or the insulation around it being removed. In the not-so-distant past, important structures like the Pentagon used to encase their wiring in lead or similar materials to prevent it from being tapped by unwanted individuals. STP wires make the tapping of a communication line much harder to do. The sad thing is that they do not make it impossible. The best solution to potential tapping comes in the form of fiber-optic cables. The magnetic-flux problem becomes much less of a problem when there is no copper wiring to tap, which means that fiber-optics are much more secure on that front. They are, however, still not impossible to tap. Tapping can be done at the equipment level; if you ever want to tap the fiber itself, you have to actually go into the wiring, cut the cable, and repair it afterward. This is something that you are likely to notice, so you can react more efficiently.

The second issue of electrical wiring does not come from the outside, but rather from the inside out. In the presence of magnetism, wires can often take on additional currents. What this means is that you are going to have to be especially careful when laying down your cables. Keep your copper cables as far away as you can from any powerful source of magnetism: motors, speakers, fluorescent light ballasts, amplifiers, and similar devices. Keep them away as much as you can.

Frequency

Frequency is yet another attribute that every piece of cable has. Every piece of cabling has a specified maximum frequency, and your maximum frequency dictates how much transmission bandwidth the cable can handle. The Cat 5e cable was determined to have a maximum frequency of 100MHz, which means that for short distances it can run 1Gbps signals. That is the bare maximum that it can handle; however, this makes Cat 5e perfect for connecting desktops that you want running at high speeds. Cat 6, on the other hand, maxes out at 250MHz, meaning that running 1Gbps is an easy task. Cat 6 has thicker cables and more twists when compared to Cat 5e. This makes it the optimal cable to use when connecting the floors of a single building. Remember that the signal itself is measured as bandwidth, while frequency is the capacity to carry a signal.

Wiring Standards

We will be discussing Ethernet more later in the book; however, here we will talk a bit about the cabling. It is extremely important to understand it if you want to run any sort of LAN network. There are a lot of different wiring standards available in the world:

● 568B wiring (crossover cable)
● Hardware loopback
● Rolled cable (rollover)
● 568A wiring (straight-through cable)

Below we will discuss the specifics of each of these and give a few examples.

568A vs. 568B

If you open up a network cable and inspect what's inside, you will find four pairs of wires which are twisted together. This is done to prevent EMI, crosstalk, as well as tapping. The same pins on each end of a cable need to be connected to the same colors throughout a network to be able to receive and transmit. This, however, begs the question of how you decide which color to use for what. I have some good news for you: you do not have to decide this. Well, not completely. Two wiring standards are agreed upon by over 60 vendors worldwide. Simply put, over the past several years most network jacks have been pinned to either the 568A standard or the 568B standard. This can cause quite a bit of confusion when you start networking and do not know what to expect.

Now you might be wondering why the difference is important and what it even is. The difference is extremely small, even though it is quite crucial. It all comes down to the position of four of the wires on one side of the cable. That is all there is to it.

Every UTP cable has eight wires. Whenever you are installing cabling, you need to make sure that all eight pins are connected by using Cat 5e or Cat 6. VoIP (voice over IP) makes use of all eight pins, and in today's networks it is very common to see both voice and data being transmitted on the same wire. To cut it short, though, the only pins that Ethernet itself needs connected are 1, 2, 3, and 6.

If you want to use the 568A standard, you need to connect the green-white, green, orange, and orange-white wires to pins 1, 2, 6, and 3 respectively. Do this on both sides of the cable, and you have created the straight-through cable that is used in most networks around the world as a patch cable. The 568B standard switches around pins 1 and 3 and pins 2 and 6; wire one end of a cable as 568A and the other as 568B and you have created a crossover cable.

Straight-Through Cable

A straight-through cable is needed when you want to connect a host to a hub or a switch, and when you want to connect a router to a switch or a hub. Simple enough, right? In this kind of cable, four wires are used for the connection of Ethernet devices. This is fairly simple to do: you connect the pins that have matching numbers. That is honestly all there is to it. However, what you should remember is that this is a 10/100 Ethernet-only cable, which means it has several limitations. It cannot work with 1000Mbps Ethernet, Token Ring, or voice.

Crossover Cable

As we mentioned before, the crossover cable uses the same four wires as a straight-through cable does. Similarly to straight-through cables, you just need to connect the pins properly. Crossover cables are used to do the following:

● Connect hub to hub
● Connect switch to switch
● Connect host to host
● Connect a router directly to a host
● Connect hub to switch

To reiterate, to make a crossover cable you need to connect pin 1 to 3 and pin 2 to 6, instead of 1 to 1 and 2 to 2, and so on. Crossover cables are only used in Ethernet UTP installations. A crossover cable can be used to connect two different NICs to one another, or a server NIC to a workstation NIC. We cannot stress enough how important it is to label your cables correctly. If someone you work with takes a crossover cable and tries to use it as a workstation patch cable, their workstation will be cut off from the hub and the network.

Crossover cabling is very important for any IT work. You should always carry one in your tool bag when you are out working. At the very least, you can use it to check whether the NIC of a server is working as it should by connecting directly to it with your trusty crossover cable. This will most likely allow you to log into the server if both NICs are correctly configured. Before trying this, you should use a cable tester to check that what you have is in fact a crossover cable. While on the subject of cable testers, you should have one at your disposal if possible. You can use it to check whether there are problems with your cables, which can be extremely important. This little tool can be a great ally in any job regarding networks that you might have.

Rollover Cable

Usually, you won't see rollover cables get used for hooking up Ethernet connections with one another. Their use sometimes lies in connecting hosts to a router's console serial com port. With that being said, if you own a switch or a router, this kind of cable can come in handy for connecting your machine running HyperTerminal or a similar program. The cable has 8 wires within, and they are used to connect different serial devices, although not all of them are capable of sending information. This kind of cable is probably the easiest to make, since the process involves cutting the end off one side of a straight-through cable and putting a new connector on with the wire order rolled over. It really is as simple as that.

Hardware Loopback

When it comes to wiring, it's rather difficult to talk about loopback non-intrusively. This is because loopback these days is less of a wiring standard and much more simply a way to redirect data flow. If you want a machine to think that it has a live network connection even though it doesn't, loopback becomes extremely useful. This is usually done when testing before you install a live network. In these cases, you'll want the machine to see its own input and output by itself. A loopback plug operates much like a crossover cable, with the main difference being that it connects the receive pins to the transmit pins directly. NIC software diagnostics utilize this feature to test out transmission and reception ability. Here, you must remember that all NIC tests need you to use one of these, and will not be completely accurate otherwise.
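Pulling the 568A/568B discussion together, here is a small sketch of the pin assignments described earlier. Only the four data-carrying pins (1, 2, 3, 6) are shown, matching the text; a straight-through cable uses the same standard on both ends, while a crossover pairs 568A with 568B:

# 568A pin colors for the four data pins, as given in this chapter.
T568A = {1: "green-white", 2: "green", 3: "orange-white", 6: "orange"}

# 568B swaps pins 1<->3 and 2<->6 relative to 568A.
SWAP = {1: 3, 2: 6, 3: 1, 6: 2}
T568B = {pin: T568A[SWAP[pin]] for pin in T568A}

def cable_type(end_a, end_b):
    """Straight-through if both ends match; crossover otherwise."""
    return "straight-through" if end_a == end_b else "crossover"

print(cable_type(T568A, T568A))  # straight-through patch cable
print(cable_type(T568A, T568B))  # crossover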

Installing Wiring Distributions

At this point, I assume you are becoming aware of the fact that there are a lot more components in networks than you might assume at first glance, if you did not know it already. This is something that becomes very apparent once you have installed your first network, but it does not hurt to know it in advance. If you have participated in installing a network, or are currently participating, you are already aware of the components that go into the process. You most likely have experience with checking that your components do what they need to do and are installed and tested properly. If you do not have said experience, pay close attention, as we are going in-depth. It might seem as if there is a lot to it at first glance and, to be honest, there is, but with careful study and planning, you'll get the hang of it in no time.

Vertical and Horizontal Cross-Connects

A cross-connect is where the cable elements in the system terminate and are reconnected. To put it more simply, a cross-connect is the place where all of your wires come together. Horizontal cables are cables that go from communications closets to wall outlets. The "horizontal" part of the name comes from the fact that they are usually used when the connected elements are on the same floor of a building. Vertical cables are the backbone cables that connect telecommunications rooms, equipment rooms, or any other groupings of termination points. As you may have already assumed, the "vertical" part of the name comes from the fact that these cables usually go from floor to floor. Easy to remember, right? Be that as it may, all of these cables end up connected to one another once the cabling of a single building is finished. There is no exact formula for installations; the size depends on what the organization needs and the architecture of the building in which it is being installed. For example, a single Cat 3 horizontal cross-connect run is limited to 100 meters, meaning that anything longer than that will require some adjustments.

Patch Panels

A wall-mounted structure or rack that houses cable connections is what we usually call a patch panel. Patch cables usually plug into the front side, while the back holds the punched-down connections of the more permanent, longer cables. The patch panel has a very important and specific purpose: it gives the administrator a grief-free platform for quickly changing the path of a signal in case of need. Patch panels are very useful if your cables receive some damage or start failing. The administrator can simply "patch around" the dead area by establishing a connection between two panels.

66 Block

A 66 block was a very common kind of patch panel in the past. While they are now considered legacy equipment, they still serve some use and, more importantly for you, pop up as an objective of the CompTIA Network+ exam. They have generally been put out of use due to their bulky size; smaller, newer models are used instead of them worldwide. Another flaw they had was their small capacity, hosting only up to 25 pairs. The thing that cemented the downfall of 66 blocks was the fact that they were limited to 10Mbps networks. While they did manage to fulfill their purpose when that was the golden standard, it is just not an acceptable drawback today.

MDF/IDF

The MDF, or main distribution frame, is a term used for a wiring point that is usually used as a reference point for telephone lines. The MDF is installed during the pre-wiring phase of building an object, along with the internal lines that connect to it. Once that is done, it is connected to the external lines, thus completing the circuit. Another wiring frame, called the IDF or intermediate distribution frame, is located inside the telecommunications room of the building. It is directly connected to the MDF, and its purpose is to provide the room with greater flexibility when it comes to the distribution of communications. It is preferably a sturdy metal rack that is specifically designed to hold the backbone and cross-connect cables that stretch through the whole building.

25 Pair

As you might assume from the name itself, a 25-pair cable is made up of no more and no less than 25 pairs of wires inside the same insulating jacket. Its main use comes in the form of data cabling and telephone cabling. It is also commonly used for backbone and cross-connect cabling because it greatly helps with cable clutter. This type of cable is also called a feeder cable because it can supply many connected pairs with signals.

100 Pair

As you might have already guessed, the 100-pair cable is one huge insulated cable that holds 100 pairs of wires. This bulky piece of cabling is mostly used by huge telephone companies that have very bulky installations. It is also used in aerial installations and is even sometimes buried in duct-type installations that go through different floors of buildings. The difficulty comes from the fact that there are a lot of colors that look similar when you are connecting it on your own. Do it with great caution.

110 Block

A 110 block is a relatively new type of wiring distribution point. It has already replaced most telephone wire installations and is also seeing a lot of use in computer networking. One side of the block holds punched-down wires, while the other side holds RJ-11 and RJ-45 connections for phone and network purposes respectively. 110 blocks come in different sizes, each holding anywhere from 25 to more than 500 wire pairs. They can even carry a 1Gbps connection when paired with Category 6 cables. The problem with using Cat 6 is the fact that terminating it becomes very difficult due to the size of the Cat 6 wiring.

Demarc/Demarc Extension

The demarc, also known as the demarcation point, is where the service provider's responsibility ends. The demarc is sometimes placed inside the MDF of your building; however, it is usually just another RJ-45 jack that your CSU/DSU (channel service unit/data service unit) connects into from your router for WAN connections. During troubleshooting, both sides of the demarc are usually tested for connectivity to see whether a problem is external or internal. A demarc extension is a term we use to refer to the length of fiber or copper that begins at the demarc but does not quite reach your office.

Smart Jack

A smart jack, also known as a NID (network interface device) or network interface unit, is a special kind of network interface that is usually placed between the internal network and the service provider's network. The telephone provider (PSTN) owns it, and it is impossible to test a demarc physically without a NID. The device might also come equipped with protocol and code conversion, making the signal of the service provider receivable by any device on the internal network.

Verifying Correct Wiring Installation

While it is easy to claim that you double-check every decision you make, it is probably wise to assume that you have made some mistakes. The same can be said of wiring installation. It is easy to get overly confident, especially if your installation is aesthetically pleasing. It is also not rare for people to cut corners when doing documentation, leaving the records with false and incomplete information. If you want to do it professionally and by the book, you must check the connectivity of each cable and record your findings in detail. This information should include when the cable was tested and what results the test yielded.

There is a plethora of things that can go wrong in cabling. Place a copper cable near a magnetic source, and there are bound to be some malfunctions. When pulling a cable through a corner or a tight space, you can rip out cable jackets. You can damage cables by extending them more than you should, as well. This is especially true for fiber-optic cables, which are even more fragile than regular ones. Treat every cable with caution.

The best companies that handle the installation of cables check everything from top to bottom several times. If you are doing the installation yourself, do the same. Think of it like this: you have a customer that is paying you a lot of money to install cables for them. You do not want to gamble that money, or their trust, away, since you want your business to bloom. As they say, measure twice, cut once. The moral of the story is that you can never be too sure, and that if you think you haven't made any mistakes at first, you definitely have.
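The record-keeping this section insists on can be as simple as an append-only log. Here is a minimal sketch of the fields the text mentions (which cable, when it was tested, and the result); the file name and extra notes column are chosen here purely for illustration:

import csv
from datetime import date

def log_cable_test(cable_id, passed, notes="", path="cable_tests.csv"):
    """Append one test record: which cable, when it was tested, and the result."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([cable_id, date.today().isoformat(),
                                "PASS" if passed else "FAIL", notes])

log_cable_test("IDF2-patch-07", True, "checked with cable tester after the pull")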

Verifying Proper Wiring Termination

Wiring termination is another area where many things can go wrong. Just terminating copper wires can cause a lot of problems in many ways. Even if you are not properly trained, pulling a cable properly mostly takes a bit of time and patience. This is true most of the time, however, not always. Terminating copper cables to a punch-down block is quite difficult: it takes quite a bit of practice and experience before you can do it properly and consistently. You can avoid a lot of trouble by making sure that all your wires are terminated properly, in the right order.

The termination of fiber-optic cables is especially tricky. It requires some pretty expensive training and equipment to be done properly. You want your installers to take their time, even hours of work, when removing fiber-optic cables, since doing it improperly can cause you problems upon the installation of new cables. I cannot stress this enough: test your connectivity as often as you can.

Cables in CompTIA Network+

It might be easy to underestimate the importance of cabling in both IT and the CompTIA Network+ exam; trust me, it would be unwise to do so. IT isn't all about fancy digital work. It takes some elbow grease to stay on top of the game. On top of that, the CompTIA Network+ exam takes quite a bit of time to test your knowledge of cables. The exam wants to check whether you are ready to start doing the job, and that includes the more physical parts. Understanding the properties of each cable and wire can be quite crucial, especially when you are working on the installation of said cables. CompTIA does not let you make any excuses either. Knowing the little details that go into the process of cable selection and cable installation will make you a very efficient and effective technician. And we all know what that means: more job options and more money.

The knowledge that you need to absorb here and take to your exam is very varied. Due to certain similarities and tiny differences, it might be hard to absorb it all effectively, but I will try to summarize it as best as I can. You need to understand what kinds of cables can be found in today's networks. Focus on remembering the differences between coaxial, twisted-pair, and fiber optics and how they differ from one another. You also need to understand the various types of ends that are compatible with each different type of cable. For example, coax cables use the BNC end, while twisted-pair uses either RJ-11 or RJ-45 depending on its function. Fiber, on the other hand, can use many different kinds of ends depending on the function. Another important thing that you need to learn is the difference between 568A and 568B: where they are used, what they allow you to do, and what they do not.

Chapter 5 Internet Protocol (IP)

The Transmission Control Protocol/Internet Protocol (TCP/IP) suite was created by the Department of Defense (DoD). It is a suite created to ensure and preserve the integrity of data; its second purpose is maintaining communications in the case of a catastrophic war. What this means for you as a networker is that upon proper implementation of TCP/IP, you will be left with a network that is both dependable and resilient. TCP/IP has quite a few things hiding behind what meets the eye, and that's what we will be focusing on during this chapter. Since TCP/IP is so fundamental for working with intranets and the Internet, you need to understand the minute details about it.

The roots of TCP/IP can be traced all the way back to April 1969, when the very first RFC (Request for Comments) was published. As you might already know, this paved the way for each and every protocol used on the Internet. Every single one of them is specified in the editions of the RFCs that have been published, all of which are maintained by the IETF (Internet Engineering Task Force). TCP/IP first came to light in 1973. In 1978, it was split into two distinct protocols, namely TCP and IP. Finally, in 1983, it replaced NCP after being authorized as the official means of data transport.

Most of the development that happened regarding TCP/IP happened at UC Berkeley in Northern California. As you might have grown to expect in the world of IT, a small group of enthusiasts turned into a very impactful movement. This went so far that the US government created a new program that aimed to test any newly published standards and make sure that they matched certain criteria. This was done to protect the integrity of TCP/IP and to make sure that no developer made any changes that were too large or added any proprietary features. This very approach to TCP/IP was perhaps its best quality: the openness is what made the family of protocols so popular, and it guaranteed a solid connection between many different platforms.

TCP/IP and the DoD Model

The DoD model can most easily be described as a simplified version of the OSI model. This can best be seen from the fact that it has four layers instead of the regular seven. The four layers are:

● The Process layer, also known as the Application layer
● The Host-to-Host layer
● The Internet layer
● The Network Access layer

Other than this, the two models are relatively similar. When we talk about the different protocols in the IP stack, you will start to see a lot of similarities between the two models. For example, the OSI Network layer and the TCP/IP Internet layer describe exactly the same thing. Furthermore, the Host-to-Host layer and the Transport layer are also used to describe the same thing. The other two layers of the DoD model are made up of several layers from the OSI model.

There is a staggering number of different protocols that combine at the Process/Application layer of the DoD model to integrate the different duties and activities that happen in OSI's corresponding top three layers, namely Application, Presentation, and Session. The Process/Application layer is in charge of defining the protocols that regard node-to-node application communication. It serves a second purpose of managing user-interface specifications.

The Host-to-Host layer of the DoD model is equal to the Transport layer of OSI. Its primary function is to define protocols that set up the transmission service levels for applications. It makes sure to fill the roles of creating very sturdy and reliable end-to-end communication, and it works to ensure the error-free delivery of data. On top of all that, it handles the sequencing of packets and the maintenance of data integrity.

The Internet layer of the DoD model is equivalent to the Network layer of OSI. It handles the designation of protocols regarding the transmission of packets over the entire network. It addresses the hosts on a network by giving them IP addresses and directs packets among multiple networks.

The bottom layer of the DoD model, also known as the Network Access layer, is in charge of monitoring the data exchange between the network and the host. It serves the same function as a combination of the Data Link and Physical layers of OSI: it oversees hardware addressing and also defines the protocols that handle the physical transmission of data.
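The layer correspondence described above condenses into a small lookup, which can help when exam questions ask you to translate between the two models:

# DoD (TCP/IP) layer -> the OSI layers it corresponds to, per this section.
DOD_TO_OSI = {
    "Process/Application": ["Application", "Presentation", "Session"],
    "Host-to-Host":        ["Transport"],
    "Internet":            ["Network"],
    "Network Access":      ["Data Link", "Physical"],
}

for dod, osi in DOD_TO_OSI.items():
    print(f"{dod:20} -> {', '.join(osi)}")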

The Process/Application Layer Protocols

Telnet

Telnet is one of the more interesting protocols that we can take a look at. If I were to find an appropriate comparison, it would be to a chameleon. In essence, it serves to emulate a terminal, allowing remote machines (which we'll call Telnet clients) to access resources on other machines (Telnet servers). This is done by fooling the server into thinking that the client is a terminal attached to the local network. This is called a software shell: a terminal made able to interact with remote hosts.

Telnet can cope with a vast variety of procedures. Some of these are rather simple, while others are complex. The terminals that it emulates are usually stuck in text mode; however, they can do things such as showing you different menus and letting you pick between different options. When you want to start a Telnet session, you need to run the Telnet client software first. When you're done with this, you can log into the server. To keep this simple: Telnet by itself doesn't provide you with any security, so security has to be handled by SSH, or Secure Shell, across the whole session.

File Transfer Protocol (FTP)

FTP lets you move files to any place in an IP network; it can connect any pair of machines that use it. While FTP is indeed a protocol, it is not limited to being one: it is as much a program as it is a standard protocol. As a protocol, FTP has widespread use, being employed by a variety of different applications. As a program, FTP is used directly by users to perform file tasks by hand. Another excellent feature is that it lets you work with both files and directories, including directory operations such as listing the files within different directories. Together with Telnet, you can log onto an FTP server discreetly, and then transferring your files safely becomes a piece of cake.

With that being said, it isn't quite that simple. It would be nice if all you needed to do were reach a host through FTP, but it isn't. Once you get there, you'll be faced with an authentication step, usually secured with usernames and passwords set by the admin. This is done to restrict who can access which files. You can skip all of this by using the username "anonymous", but your access will then be fairly limited. While FTP is versatile in its use, it doesn't sport too many functions. Usually, FTP is used just to list the contents of a directory or to copy files between hosts. FTP cannot launch remote programs, nor can it open remote files in place.

Secure File Transfer Protocol (SFTP)
SFTP is used whenever you want to transfer files over an encrypted connection. It is similar to FTP in that it is used to transfer files from one computer to another on an IP network. Where SFTP shines is the secure part: it protects the data you are transferring without you having to take any extra steps.

Trivial File Transfer Protocol (TFTP)
TFTP might sound like FTP, and that's because it is essentially a stripped-down version of it. While it might sound odd to make a simplified version of an already simple protocol, in this case it pays dividends. Compared to FTP, TFTP has far fewer features and cannot browse directories, but it is much faster and easier to use. It sends and receives files, and it can transfer small blocks of data with ease. With that being said, TFTP is not a secure protocol. If you have any sensitive files, you do not want to send them over TFTP, and this lack of security is the reason it is so scarcely supported.

When should FTP be used?
Let's say, for example, that an office in another town needs a 50 MB file sent to them as soon as possible. What is the best course of action for you to take? You could try emailing it, but the problem is that most email services have size limits, and 50 MB can be too much. Even when there is no size limit in place, it is a relatively big file to send through email, and it would take a lot of time.

This is where FTP comes into action. Whenever you want to send or receive a large file, that is when you want to use FTP. For small files, emailing them will do you fine, but FTP is your pocket choice if that goes down the drain. To use FTP, you need to take a few steps first. You are going to have to set up an FTP server so that you can actually share the files that you want. This might seem like a bit of a drag, but once you do so, you can use FTP instead of your email for large transfers, since it is much faster. Another great thing about FTP is that an interrupted transfer can be resumed, meaning you can pick up where you left off. Isn't that great?
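Python's standard ftplib module gives a feel for how an FTP session proceeds: connect, authenticate, then issue file and directory commands. This is a minimal sketch; the server name and file name are placeholders, and anonymous login only works where the server allows it:

import ftplib

# Connect and log in anonymously to a hypothetical server.
ftp = ftplib.FTP("ftp.example.com", timeout=10)
ftp.login()                       # defaults to user "anonymous"
print(ftp.nlst())                 # list the current directory
with open("report.pdf", "wb") as f:
    ftp.retrbinary("RETR report.pdf", f.write)   # download a file
ftp.quit()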

Network File System (NFS)
NFS is often considered the jewel of file sharing. This protocol allows different kinds of file systems to interoperate. It works somewhat like this: say an NFS server is running on an NT server and an NFS client is running on a UNIX host. NFS allows a portion of the storage on the NT server to transparently hold UNIX files, which UNIX users can then access. Despite the UNIX and NT file systems being very different, NFS allows both UNIX and NT users to access the same files in the ways that are normal for each of them.

Simple Mail Transfer Protocol (SMTP)
SMTP answers our call to email. It uses a queued method of mail delivery. Whenever a message is sent to a destination, it is spooled to a device, most of the time a disk. Server software at the destination keeps watch, regularly checking the queue for messages. When it detects a message, it delivers it to its destination. SMTP sends mail; POP3 receives it.

Post Office Protocol (POP)
POP gives you a storage area for your incoming mail. The current version is POP3. The protocol works like this: when a client device connects to a POP3 server, the messages addressed to the client are downloaded. The client does not get to choose which messages are downloaded; once they arrive, the interaction with the server ends and you can do whatever you want with the messages locally. Recently a newer standard, IMAP, has been replacing POP3. Let's find out why.

Internet Message Access Protocol, Version 4 (IMAP4)
IMAP4 has one distinct advantage over POP3: it lets you pick and choose how you download your mail. This helps you reduce the clutter you get from unwanted messages. It also gives you some extra security, as you can peek into a message and decide whether you want to download it at all. This helps you avoid viruses and unwanted software that you would have been at risk of automatically downloading via POP3. Another great function is that it lets you sort your email on the server however you want, as well as link messages to individuals and groups. IMAP also boasts great search commands that you can use to find messages by any part of the header, subject, or contents. On top of all of this, it has strict authentication features: it supports the Kerberos scheme developed by MIT, giving you that extra layer of security.
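Since we have just covered the mail protocols, here is a minimal sketch of handing a message to SMTP with Python's standard smtplib. It assumes an SMTP server listening on localhost port 1025 (for example, a local debugging server); the addresses are placeholders:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "SMTP test"
msg.set_content("Queued, relayed, and delivered by SMTP.")

# Hand the message to the (assumed) local SMTP server for delivery.
with smtplib.SMTP("localhost", 1025) as server:
    server.send_message(msg)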

Transport Layer Security (TLS)
TLS is not the first protocol of its kind; it has its roots in SSL (Secure Sockets Layer). Both are cryptographic protocols, and they are what you want to use whenever you need to secure an online data-transfer activity. Data-transfer activities are not as rare as they might sound; a lot of things that you might not think belong in this category do. Browsing the web is one of them. Sending an instant message is another.

SIP (VoIP)
The Session Initiation Protocol is extremely popular, especially as far as signaling protocols go. It is used to set up and tear down multimedia communication sessions. This covers voice and video calls, as well as video conferencing and streaming.

RTP (VoIP)
The Real-time Transport Protocol is a standard for delivering audio and video over the Internet. While it was originally intended to be a multicast protocol, it is now used for unicast as well. It is often used for streaming media and for push-to-talk systems.

Line Printer Daemon (LPD)
LPD is a protocol designed for printer sharing. Used in conjunction with the Line Printer (LPR) program, it lets print jobs be spooled and sent to the printers on the network via TCP/IP.

X Window
X Window was designed for client/server operation. It is a protocol used to write client/server applications with a GUI. What it aims to do, in general, is allow a client (in this case a program) to run on one computer and display on another computer within a network.

Simple Network Management Protocol (SNMP)
SNMP is used to collect and manipulate valuable information about the network. It polls the devices on a network to gather data, doing so from a management station at intervals that can be either random or fixed. If everything is as it should be, SNMP receives a baseline: a report that details the operational traits of a healthy network. The main purpose of the protocol is to stand watch over the network. A network watchdog, also known as an agent, sends an alert whenever it detects that something is amiss. On top of all this, SNMP simplifies the administration of an internetwork.

Secure Shell (SSH)
Secure Shell is a protocol that sets up a secure Telnet-style session over a standard TCP/IP connection. It is mostly used to log in to other systems, run programs on other systems, or move files from system to system, all while maintaining an encrypted connection. Think of it as a replacement for rlogin and rsh, and maybe even for Telnet soon enough. Who knows?

SNMP versions 1, 2, and 3
Over time, versions 1 and 2 of SNMP have become all but obsolete. That is not to say you won't see them at times, but v1 is quite weak compared to the tools that can replace it. SNMPv2 was a strict improvement, especially when it comes to performance and security. The best thing v2 added to the formula was GETBULK, an operation that allows a host to retrieve a large amount of data at once. Sadly, v2 never really caught on. With the release of v3, SNMP quickly became the standard over both UDP and TCP, adding further security to the protocol in the form of encryption, authentication, and message integrity. Remember that when running v1 or v2 you are not safe from packet sniffers.

Hypertext Transfer Protocol (HTTP)
HTTP is responsible for most of what you see in your web browser. From flashy graphics to plain links, HTTP is making it all happen. When it was first made, it served the single function of establishing communication between browsers and web servers and opening whatever you click on.

Hypertext Transfer Protocol Secure (HTTPS)
As you might have assumed from the name, HTTPS is the secure version of HTTP. The advantage it has over regular HTTP is that it arms you with tools that help keep you safe while interacting with the web. It is what your web browser uses whenever it needs to fill out a form or sign in to anything, and it is used whenever an encrypted HTTP message needs to be sent, such as when you make a monetary transaction online. HTTPS and SSH are both used to encrypt packets on your intranet and on the Internet.

Network Time Protocol (NTP)
Created by Professor David Mills of the University of Delaware, NTP is used to synchronize the clocks of your computers with a standard time source. While this might not seem like much, it is a luxury we have grown used to, and it is extremely important: time and date play a role in the transactions you make online. Your server needs to be in sync with the machines running against it; if the time is off by even a second, a whole operation can fail.
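To see HTTP and HTTPS in action from code, Python's standard urllib can fetch a page over an encrypted connection; the URL here is a placeholder, and the sketch just prints the status code and the first few bytes of the response:

import urllib.request

# Fetch a page over HTTPS (TLS handles the encryption underneath).
with urllib.request.urlopen("https://www.example.com/") as resp:
    print(resp.status)                             # e.g. 200
    print(resp.read(80).decode(errors="replace"))  # start of the HTML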

Network News Transfer Protocol (NNTP)
NNTP is what you use to access Usenet news servers. These servers hold the many message boards we call newsgroups. As you might already know, newsgroups exist for just about any interest a person might have. If you happen to be a classic-car enthusiast, for example, you will probably find many newsgroups you can join, and the same can be said for any other interest you might think of. The specifications of NNTP are handled by RFC 977. Configuring a newsreader program can be fairly involved, which is why many people rely instead on the web-based resources we are provided with.

Secure Copy Protocol (SCP)
We already talked about how useful FTP is. However, it is not your friend if you want files sent securely. This is where SCP comes into play. Its primary function is protecting your sensitive files. It first uses SSH to establish a secure connection between the two hosts holding the conversation and keeps it up until the transfer is complete. When using SCP, the only person who can receive the files you send is the one you intended. It is, however, usually outdone by SFTP, which is used more often.

Lightweight Directory Access Protocol (LDAP)
Almost every system administrator of a larger network keeps some sort of directory to track the resources on the network. How do they access those directories, though? That is where LDAP lends a hand. The protocol itself is a standardization of how you access directories. The specifications of its first and second versions are detailed in RFC 1487 and RFC 1777 respectively. Those two versions had quite a few glitches, which is why a third version was required. Its specifications are detailed in RFC 3377, and it is the version most commonly used these days.

Internet Group Management Protocol (IGMP)
IGMP is the protocol responsible for managing IP multicast sessions. It does so via IGMP messages, which it sends through the network to reveal the multicast landscape and find out which hosts belong to which multicast group. Host machines in an IP network also use IGMP messages to join and leave groups, and the messages are likewise used to track group memberships and multicast streams.

Line Printer Remote (LPR)
Whenever you want to print in a pure TCP/IP environment, you need a combination of LPR and LPD (Line Printer Daemon) to do the job properly. LPD handles the print jobs and the printers themselves. LPR, on the other hand, works from the client or sending machine and is used to send data from the host machine to the network's print resource. At the end of the procedure, you get actual printed output.

Domain Name Service (DNS)
DNS resolves hostnames, and Internet names in particular, to their corresponding IP addresses. You do not strictly have to use DNS; just typing in the IP address of the host you want to reach works, since IP addresses are what identify hosts on a network or the Internet. DNS is there to make the process much easier. For example, say you want to move your web page to another service provider. Your IP address would change, and no one would have any idea what it changed to. What DNS does is let you use a domain name to stand for an IP address. You can change your IP address as many times as you want, but your domain name will not change, and nobody will notice the difference.

DNS can resolve a fully qualified domain name (FQDN). An FQDN represents a hierarchy that can be used to locate a system by its domain identifier. If you want to resolve a name, you either have to use the FQDN itself or have a device such as a PC or router append the domain suffix for you. A very important thing to remember about DNS: if you can ping a device by its IP address but not by its FQDN, you should check your DNS configuration.
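Name resolution is a one-liner from code. A minimal sketch with Python's standard socket module, using a placeholder hostname:

import socket

# Ask DNS (via the operating system's resolver) for the address behind a name.
print(socket.gethostbyname("www.example.com"))  # prints the resolved IPv4 address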

Dynamic Host Configuration Protocol (DHCP)/Bootstrap Protocol (BootP)
DHCP assigns IP addresses to hosts using information held on a server. It makes administrative work much easier in both small and large networks. DHCP can run on many different types of hardware, including routers. DHCP is similar to BootP in that BootP also assigns IP addresses to hosts; however, with BootP the host's hardware address has to be entered into its table manually. In that regard, DHCP can be considered a sort of dynamic BootP. You should remember, though, that BootP can also be used to send an operating system image to a host so the host can boot from it; you cannot do this with DHCP. On the other hand, a DHCP server can provide a lot of information to a host when the host requests an IP address. The following is some of what a DHCP server can provide:
The IP address
The subnet mask
The domain name
The default gateway (routers)
The DNS server information
The WINS information
This is only part of the information DHCP can provide, but these items are the most common. Whenever a client wants to receive an IP address, it sends out a DHCPDISCOVER message as a broadcast at both layer 2 and layer 3. The layer 2 broadcast address is all Fs in hexadecimal: FF:FF:FF:FF:FF:FF. At layer 3 the broadcast address is 255.255.255.255, meaning all hosts on the local network will receive the message. DHCP uses UDP (User Datagram Protocol) at the Transport layer, meaning it is connectionless. Receiving an IP address from a DHCP server takes the form of a four-step process:
1. The client broadcasts a DHCPDiscover message, which aims to find a DHCP server.

2. The DHCP server that receives the message answers with a unicast DHCPOffer message.
3. The client sends out a DHCPRequest message asking for the offered IP address and, most likely, some additional information.
4. Finally, the server completes the exchange with a DHCPAcknowledgment message.
Remember that if a DHCP server isn't available, you can configure the IP information yourself. This is what we call static IP addressing. Windows also provides APIPA (Automatic Private IP Addressing) in all of its operating systems; APIPA lets hosts configure an IP address for themselves automatically whenever a DHCP server is not available.
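APIPA addresses always come from the 169.254.0.0/16 range, so spotting one is a quick hint that a host failed to reach a DHCP server. A minimal check with Python's standard ipaddress module:

import ipaddress

APIPA_RANGE = ipaddress.ip_network("169.254.0.0/16")

def looks_like_apipa(addr: str) -> bool:
    # True when the address is self-assigned (DHCP was unreachable).
    return ipaddress.ip_address(addr) in APIPA_RANGE

print(looks_like_apipa("169.254.17.8"))   # True
print(looks_like_apipa("192.168.1.20"))   # False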

Chapter 6 Protocols In The Remaining Layers

The remaining layers are just as important as the first one, but due to their different functions they have drastically different protocols. The Process layer is home to most of the protocols, which is why the previous chapter was devoted to it. This chapter focuses on the rest of the layers and the protocols that run on them.

The Host-to-Host Layer Protocols

The Host-to-Host layer is there because upper-layer applications need to be shielded from the complexities of the rest of the network. It takes the upper layers' data streams and prepares the information for sending. There are two notable protocols in this layer:
The TCP (Transmission Control Protocol)
The UDP (User Datagram Protocol)

Transmission Control Protocol (TCP)
TCP takes sizable chunks of information from an application and breaks them down into smaller segments. It then assigns each of those segments a sequence number, which makes it possible for the TCP process at the destination to reconstruct them into the original piece of data. Once the transmitting host finishes sending its segments, its TCP process waits for an acknowledgment from the receiving host's TCP process. If the receiving host detects that the data is not complete, the missing segments are resent. Notably, all of this occurs only after a session has been created: whichever device wants to transmit first must set up a connection-oriented session with its peer system. This is usually referred to as a three-way handshake, though it can also be called a setup. Once this is done, the data itself is finally transferred, and when the transfer is complete, a call is made to tear the circuit down. This is both a rather expensive and complex process. Because the networks of today are so much more reliable than they once were, the reliability that TCP brings to the table is not always needed.

User Datagram Protocol (UDP)
Compared to TCP, UDP is the more economical of the pair. The main reason it gets used over TCP is that it takes up much less bandwidth on your network. It does not jump through all of the hoops that TCP does, which makes it perfect for sending data that does not require guaranteed delivery, and it uses fewer network resources to do so.

There’s a bunch of situations where developers mostly opt to use UDP rather than TCP. These are usually related to SNP. It’s worthwhile to remember that SNMP checks all alerts in real-time. Hence, if you used TCP for every little message, in the end, you would end up wasting tons of bandwidth. You will want to use UDP when the reliability of the transmission is being handled by the Process/Application layer. As NFS handles matters of reliability of its own. This makes TCP extremely impractical. At the end of the day the decision of using TCP or UDP falls to the application developer and not the user. UDP does not care about the order in which the segments of the data arrive at their destination. Once the data is segmented it is simply sent off and UDP just forgets about them. It has no follow-through. It does nothing to check the packets after they are sent. It neither cares for acknowledgment signals. It completely abandons the data. Because of this factor, this protocol is usually referred to as being unreliable. Don’t get me wrong, it isn’t ineffective at all, it’s just not very good at transmitting data reliably. UDP just operates with the assumption that the app which is sending the transmission already ensured its reliability by itself. Always remember UDP is faster, TCP is more secure.

Most Important Ideas of the Host-to-Host Protocols

Now that we have introduced TCP and UDP, we can go over some important concepts regarding the Host-to-Host layer. We have seen connectionless and connection-oriented protocols in action, so here we will summarize the most important points for you to remember.

Port Numbers
The biggest similarity between TCP and UDP is that they both use port numbers to communicate with the upper layers. They need them because they track many different conversations at once. The host assigns the source port numbers itself; these dynamically assigned numbers usually start at 1024. The numbers below 1024 are reserved for well-known, globally used services.

The Internet Layer Protocols

The Internet layer exists in the DoD model for two reasons. The first is routing, and the second is providing the upper layers with a single network interface. None of the protocols in the other layers have any functions relating to routing; this complex task belongs to this layer alone and is extremely important. The Internet layer's second duty is less celebrated but still crucial to the proper functioning of a network. Without this layer, every application on a network would have to have a hook into every single one of the different Network Access protocols. This would not only waste a lot of time for every programmer working on an application, it would also lead to every network platform needing its own version of the application. This would be extremely chaotic.

It would be almost impossible for each developer to release the same updates with the same features at the same time. IP prevents this by providing a single network interface for all of the upper-layer protocols. Once that is established, it falls to IP and the many Network Access protocols to work together properly.

Internet Protocol (IP)
Everything in the network leads back to IP. Every protocol in this layer, as well as the other layers, uses it; do not forget this. The DoD model relies heavily on IP, and it is the most important part of the Internet layer. You could go as far as to say that the Internet layer would not have any purpose without it, and you would be correct for the most part. The other protocols in the layer are there to support IP and make sure it runs as smoothly as possible. IP carries the load on its back. You could say that IP can see everything in the network, because it is aware of all the interconnected networks. It can do this because every machine on a network has an IP address.

Introduction to Internet IP
Whenever a packet is sent, IP looks at the packet's destination address. Using a routing table, it then decides the best path to send the packet along. The protocols on the lowest layer of the DoD model cannot see as far as IP can, meaning they can only work with local networks.

When a device on a network needs to be identified, two questions are asked. The first is which network it is located on; the second is what the ID of the device on that network is. The first answer is found in the software address, also known as the logical address: if a network were a city, this would be the street in the address. The hardware address answers the second question: in the same city analogy, this would be the mailbox of the address. Every host on every network has a logical ID, its IP address. This logical address contains valuable encoded information, which makes the task of routing much simpler. The Host-to-Host layer sends segments down to IP, where they are packaged into packets if necessary. When the packets arrive, the IP on the receiving end also deals with their reassembly. Every packet made by IP contains the IP addresses of both the recipient and the sender. Every router that receives a packet makes its own routing decisions based on the IP address designated as the destination.

Internet Control Message Protocol (ICMP)
ICMP works at the Network layer. IP uses it for many different functions. First, it is a management protocol; on top of that, it functions as a messaging service for IP. The messages it sends and receives travel as IP packets. All ICMP packets share the following characteristics:
They provide hosts with information about network issues
They are encapsulated within IP datagrams

ICMP is called into action relatively often. It is called on when a router cannot forward an IP datagram any further while the datagram has not yet reached its designated IP address. ICMP immediately relays this to the sender and advises it of the situation. When one host sends a packet addressed to a second host and the packet fails to reach it, a router along the way sends a Destination Unreachable message back to the first host. People often underestimate the value of proper feedback: without this protocol, any data you send that failed to reach its destination would fail silently, and you would not be aware of it. This can cause you quite a bit of trouble if the data you are sending is important for your work. ICMP lets you avoid such trouble by making you aware that something happened along the way, giving you the opportunity to correct the error.

Another message ICMP is capable of sending is the Buffer Full message. This happens when the memory buffer a router uses for receiving incoming transmissions is full. The router will then use ICMP to send out the Buffer Full message and will continue doing so until enough space is freed.

Every router transmission a datagram makes is called a hop, and every datagram has a set number of hops it can make before being stopped. The "Hops" message (known formally as ICMP Time Exceeded) is transmitted when a datagram reaches its allocated number of hops before arriving at its designated destination. Once a datagram reaches that number of hops, it is deleted by the router, and the router that executed the removal is responsible for using ICMP to send a message to the sender that the datagram has been prematurely terminated.

Checking the logical or physical connectivity of machines on your internetwork is done with a ping, which is another thing ICMP is in charge of. Traceroute tells you which path a packet takes from the point where it is sent to the point where it is received. This matters with sensitive files, as it lets you see the route your traffic actually follows. Ping and traceroute share the function of verifying address configurations inside an internetwork.

Address Resolution Protocol (ARP)
ARP has the relatively simple yet very important role of finding the hardware address of a host from a known IP address. How it works is relatively simple, actually. When a datagram is ready to be sent, IP needs to inform a Network Access protocol of the destination's hardware address; the upper layers have already supplied the IP address. If IP cannot locate the hardware address in the ARP cache, it uses ARP to find it. In this sense, you can look at ARP as a sort of detective for IP: it goes around the local network and interrogates it via a broadcast. In essence, the broadcast asks the machine that has the IP address in question to respond with its own hardware address. To put it in layman's terms, ARP simply converts an IP address into a hardware address.
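Both ping and traceroute are available from any shell, and they can just as easily be driven from a script. Here is a minimal sketch using Python's standard subprocess module; the address is a placeholder, and the flags shown are the Linux/macOS ones (Windows uses -n for ping's count and the tracert command):

import subprocess

host = "192.0.2.1"   # placeholder address

# Ping once: an ICMP Echo Request, waiting for an Echo Reply.
result = subprocess.run(["ping", "-c", "1", host],
                        capture_output=True, text=True)
print("reachable" if result.returncode == 0 else "unreachable")

# Traceroute: shows each router hop on the way to the host.
print(subprocess.run(["traceroute", host],
                     capture_output=True, text=True).stdout)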

Reverse Address Resolution Protocol (RARP)
An IP machine can sometimes be a diskless machine, which means that initially it has no idea what its IP address is. It does, however, know its own MAC address. RARP is what is used to discover the IP address of a diskless machine. The diskless machine sends out a packet announcing its MAC address and requesting the corresponding IP address. A RARP server, the machine designated to answer such requests, responds by informing the diskless machine of its IP address. RARP uses what it already knows, the MAC address, to tie it to an IP address, completing the machine's identity portrait in the process.

Proxy Address Resolution Protocol (Proxy ARP)
Hosts typically have only one default gateway configured. What do you do once that gateway goes down? The host certainly won't start sending to a different, functional router automatically; you would normally need to reconfigure it. Proxy ARP does not automate this process, but it does help. It helps machines on a subnet reach other subnets without changing the original routing or even configuring a default gateway. Proxy ARP is completely non-invasive, meaning it can be added to a single router on a network without disturbing the routing tables of the other routers. Proxy ARP does come with a downside, however: it increases the amount of traffic on the network segment you place it in, and hosts are forced to grow their ARP tables to handle all of the extra mappings. Remember that on any Cisco router, Proxy ARP is enabled by default; disable it if you do not intend to use it, and you will save yourself a lot of trouble in the long run. The last thing to be said about Proxy ARP is that it isn't actually a separate protocol. It is more of a service, run by routers on behalf of PCs and other devices.

Data Encapsulation
Data encapsulation is the process data goes through when it is transmitted from one host to another device. During the process, the data is wrapped with protocol information at each layer of the OSI model. Each of the sending host's layers communicates with its peer layer in the receiving device, and each layer uses PDUs (Protocol Data Units) to exchange information. PDUs hold the control information that is attached to the data at each layer, typically in a header that is wrapped around the data as it passes through. Each PDU has a specific name depending on what its header contains, and its information is read only by the peer layer in the receiving device. Once it has been read there, it is stripped off and the data is passed up to the next layer. Simple enough, right? Well, not exactly.

The process begins with the conversion of upper-layer data into a form suitable for transmission. The data is handed down to the Transport layer, where a virtual circuit is set up connecting the transmitting host to the receiving device; the Transport layer does this by sending a synch packet to the receiving device. The data stream is then broken into smaller pieces, and a Transport-layer header (a PDU) is attached to each piece of data. Each piece is now called a segment. The segments are sequenced, which means the data stream can be put back together at the receiving device exactly as it was transmitted.

Each segment is then handed to the Network layer, where it is addressed and routed through the network. Logical addressing (IP, for example) is what gets each segment to the correct destination. The Network layer protocol adds a control header to the segment passed down from the Transport layer, and the result is called a datagram. Always remember that while the Network and Transport layers are responsible for rebuilding the data stream on the receiving end, they take no part in placing PDUs on local network segments, even though that is the only way to get information to a router or host.

The Data Link layer takes the packets from the Network layer and moves them onto the network medium, be it wireless or cable. It encapsulates each packet in a frame, whose header holds the addresses of the source and destination hosts. If the destination is on a remote network, the frame is first sent to a router and forwarded through the internetwork; once it reaches the destination network, a completely new frame is built to carry the packet to the destination host.

For a frame to be placed on the network, it must first become a digital signal. A frame is a group of 1s and 0s, and the Physical layer turns those digits into a signal that devices on the same local network can read. Each receiving device synchronizes on the signal and decodes the 1s and 0s. The devices then rebuild the frames and run a CRC, checking their answer against the one in the frame's FCS field. If the answers match, the packet is pulled out of the frame and the rest of the frame is discarded. This process is called de-encapsulation. The extracted packet is then passed up to the Network layer, where its address is checked against that of the receiving device. If they match, the segment is extracted from the packet and the rest is discarded. The segment is passed up to the Transport layer, where the data stream is rebuilt from the segments and, if everything went well, an acknowledgment is sent back confirming that everything was received. The Transport layer then hands the data stream over to the upper-layer application.
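The wrapping-and-stripping idea can be sketched in a few lines. This toy model uses nested Python dicts rather than real protocol headers, purely to show how each layer wraps the PDU it receives from above and how the receiver peels the layers off in reverse order:

# Toy encapsulation: each layer wraps the PDU from the layer above.
data = "upper-layer data"
segment = {"tcp_header": {"src_port": 50000, "dst_port": 80}, "payload": data}
packet = {"ip_header": {"src": "10.0.0.5", "dst": "10.0.0.9"}, "payload": segment}
frame = {"eth_header": {"src_mac": "AA:..", "dst_mac": "BB:.."}, "payload": packet}

# De-encapsulation: the receiver strips one header per layer, bottom up.
received_packet = frame["payload"]              # Data Link strips the frame
received_segment = received_packet["payload"]   # Network strips the packet
received_data = received_segment["payload"]     # Transport strips the segment
print(received_data)                            # "upper-layer data"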

Protocols and Layers in CompTIA Network+

Over the last two chapters we covered quite a few subjects. First of all, knowing the difference between the DoD and OSI models is basic knowledge. It will rarely matter while you are actually working, but it will most likely pop up on the CompTIA Network+ exam. Knowing the different layers will also do you a lot of good on the exam. It is rare for an exam to pass

without the layers being an important subject, spanning several questions. It is easy to underestimate the difficulty of absorbing the knowledge of the layers, so focus on it as much as you can. It is easy to get confused in the spur of the moment and mix them up. Do not let yourself fail your exam over such a small yet crucial error. On another note, knowing as much as you can about the layers can help you a lot in your IT career, especially if you primarily deal with networks. Network administration is where you get to work with the layers the most and where they matter most.

We spent the majority of the previous two chapters talking about protocols. CompTIA exams always put heavy emphasis on protocols. Why? Because they are a crucial part of computing and networking as we know them today. Knowing protocols means knowing the very basis of the craft. They are extremely varied, and it might be hard to differentiate between them at first; after some time, however, you will see that they are very intuitive in what they do. Not only that, but knowing them inside and out is a skill you will benefit from at every step of your career. Again, this is especially true if most of your work involves networking. To put it simply, there is not a single good reason not to know protocols as well as you can. They are an area of expertise that is rarely at the forefront of a problem, but they often hold the solution, or the information that leads you to it.

Chapter 7 Software And Hardware Tools

By now, you might have realized that networking is not a simple thing. In such an environment, many different tasks need to be completed, some daily and some only when the need arises. Such occasions call for specialized tools. There are many tools out there, each serving a particular purpose, and they have become an almost integral part of today's networking. They come in many different shapes, and it will do you good to learn how to use all of them, since they are an important objective for CompTIA Network+.

Understanding Network Scanners

The term network scanner is quite broad: it is used for any tool that can analyze your network. The tools CompTIA Network+ is interested in, however, are defined much more precisely. When it comes to your exam, the term refers to the following:
Packet sniffers
IDS/IPS (Intrusion Detection System/Intrusion Prevention System) software
Port scanners

Packet Sniffers
Packet sniffers are, as you might have guessed from the name, tools that examine every packet within a network segment. They come in many different flavors, and quite a lot of them are free. An example is Microsoft's NetMon. It comes in more complex and advanced versions that can be purchased, but the version you get for free with Windows Server lets you analyze the traffic of your network communications. The full version gives you more advanced features, such as the ability to pull up total network utilization or even individual frames. As you can see, it is pretty handy for anyone doing networking. Another great thing about NetMon is that it doubles as a network analyzer.

Another great packet sniffer is Wireshark. First of all, it is free and easily accessible: all you need to do is head to the website and download it. The great thing about Wireshark is that it is compatible with a lot of platforms; you can run it on OS X, Unix, or Linux. It is very easy to use, and it will capture data on all of your interfaces, including any VPN or wireless connection. It will look at all of the traffic happening in whichever segment you are interested in.

There are quite a lot of other sniffers out there, though quite a few of them come at a cost. That cost can be worth it, as you might find yourself in a situation where no free piece of software does the job. On the other hand, if you are just running your own network and the free, watered-down versions work for you, that's great.

Whether they are free or not, all packet sniffers serve the purpose of collecting the packets they capture and analyzing them to determine whether there are problems on the network. A sniffer can help you find things like retransmissions, security breaches, and bottlenecks. Every network administrator should have a good packet sniffer at hand, as they are an incredible troubleshooting tool. For example, a sniffer can help you protect your users' personal data by revealing applications that send usernames and passwords unencrypted across the network. Sniffers can also help you check whether a segment is carrying too much traffic and detect defective NICs.

What you should remember is that you can't use sniffers to catch packets once they have passed through a router into another segment. Sniffers can help you find hackers on your network, but for them to do this you need to be actively watching for intruders. If you want that kind of monitoring, you should use an IDS/IPS; that is your bread and butter when it comes to protecting your network from hacker attacks, although the same work can be done with the more expensive sniffers.

You might be under the impression that packet sniffers provide you with a lot of data. That is very true, and most of the time quite a lot of that data is useless to you when you are looking for problems. What this means is that you need a way to narrow the data down. Luckily, most sniffers come with built-in filters that you can use to focus your search. Without a filter, finding a problem with a sniffer would mean going through hundreds if not thousands of packets.
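To get a feel for what a sniffer does, the third-party Python library Scapy (installed with pip install scapy; capturing usually requires administrator/root privileges) can capture and summarize packets, with a capture filter doing the narrowing just described:

from scapy.all import sniff

# Capture ten packets, keeping only TCP traffic to or from port 80.
packets = sniff(count=10, filter="tcp port 80")
packets.summary()   # prints one line per captured packet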

Intrusion Detection and Prevention Software (IDS/IPS)
Continuing the trend of names that tell you what you need to know, IDS and IPS serve the common purpose of protecting your network from unwanted entries. An IDS detects unwanted attempts to manipulate the systems on the network and/or its environment. An IPS, on the other hand, monitors system activity on the network to locate behavior it considers malicious or strange, and it can work in real time to prevent such activity.

An IDS works on the detection and identification of unauthorized access and suspicious activity, which it then reports back to the administrator. An IDS is your best friend if you want to identify an attack; however, it does nothing toward actually stopping the attack. That is where an IPS comes into play: when the IDS reports malicious activity, the IPS shuts down ports or drops certain kinds of packets to prevent the attack.

There is a vast amount of IDS/IPS software out there, and quite a bit of it is free. As you might have assumed, the best of the bunch comes at a price, and the price is often quite steep. However, certain platforms are not compatible with most IDS/IPS software, and the more expensive products provide them with the coverage they desperately need. An example of a popular IDS/IPS software product is Snort. It can run on both Windows and Linux, and it comes at the reasonable price of being free. The key to its popularity is the fact that it is open source. That, however, is not the only reason: do not think that it lacks features just because it is free. Snort is great for small-time network administrators running a simple network, but it does not provide the firepower to cover large environments. For such cases you have Cisco's ASA (Adaptive Security Appliance) as the premium solution. It is not exactly free, but it is often worth the price.

Something you should remember about every piece of software of this type is that it is always positioned between the firewall facing the outside network and the internal router. If you are using Snort on Linux, you just need to add the software to the Linux box and place the box between the router and the firewall. This area is called the DMZ, or demilitarized zone. BASE (Basic Analysis and Security Engine) displays and reports any intrusion or attack that is logged in the database.

Port Scanners
A port scanner is something every administrator should use. It is a specialized software tool designed to search a host for open ports. Port scanners are very important for the security of your network, because hackers can use open ports to find your network's vulnerabilities, and once they find such a vulnerability you can guess what happens next. Of course, you do not want this to happen.

A port scan is the process of scanning a host for open UDP or TCP ports. An administrator scans for legitimate reasons, personal or business, to learn what is exposed on the network. A hacker, on the other hand, scans to find a port to connect to and abuse as a breach into the system, stealing information for whatever nefarious reason they might have. A port sweep is the related process of scanning several hosts within a network for a given TCP or UDP port. Hackers love doing this, as it gives them a wide area in which a single open port is enough to attack immediately. For example, a hacker can locate an SQL server and port scan it; if it is unprotected, the device becomes their playground. This is why it is best to turn off any services and devices that are not doing anything.

An example of a free program that helps you deal with such intrusions and their prevention is the Network Mapper, also known as Nmap. One of its uses is as a port scanner, but it can do much more. Just like Snort, Nmap is open source; the difference is that Nmap can run on all platforms. On top of its port-scanning ability, it can find firewalls and help with network management. It is a great tool for any network administrator, and it is extremely flexible as well, meaning you can experiment with it and fine-tune it until you find what's just right for you.

Another nice feature is how well it documents things. It also comes with a set of instructions that will do you a lot of good if you are new to it or run into a problem you never have before, and the amount of helpful documentation it ships with is staggering. Nmap is very easy to use, but if you are in the market for something even simpler, there are quite a few tools that can fill the void. Angry IP Scanner is one of them: it gives you the ability to scan both IP addresses and ports, and it is both free and open source, so you might want to try your hand at it if this is your first time using such a program.

The three groups of software we have mentioned here are extremely important for any administrator out there. If you are serving as a network administrator, turning your network into a stable environment that provides users with all they need is not your only job. Any larger network has a large base of users who are on the network for a plethora of reasons. Be it to consume content found on the network or for purely work-related matters, their time on your network will more likely than not be very important. These users entrust you with a lot of their data, as well as their personal information; this is true of any network that has a username/password system in place. The safety of that information should be at the forefront of your mind.

It is imperative for you as a system administrator to keep your network safe. The information a hacker may gather can be used not only against the network, but also to harm its users, and that in turn will hurt your network simply because you will lose your users' trust. Network security is no small thing. Learning how to use these tools is the responsibility of the administrator. They are also useful to any IT technician, as technicians are usually the ones who pick up the slack when someone else fails.
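To make the port-scanning idea concrete, here is a minimal TCP connect scanner using Python's standard socket module, in the spirit of what Nmap automates. It is only a sketch: the target is a placeholder, and you should scan only hosts you are authorized to test:

import socket

target = "192.0.2.10"            # placeholder; scan only with permission

for port in (21, 22, 23, 25, 80, 443):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)            # don't hang on filtered ports
    # connect_ex returns 0 when the TCP handshake succeeds (port open).
    if s.connect_ex((target, port)) == 0:
        print(f"port {port} open")
    s.close()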

Chapter 8 Network Troubleshooting

Unfortunately, learning how to troubleshoot networks properly isn't as simple as learning the multiplication table. Troubleshooting is as much an art as it is a science, and the only way to truly master it is by practicing a lot. With that being said, the scientific element of troubleshooting can be learned through memorization and understanding. This chapter is dedicated to that part, and we'll run through some of the most common issues you might encounter along the way.

Narrow Down The Problem

Whenever you come face to face with a network issue, the feeling can be overwhelming. After all, it's an entire set of complex issues, some of them possibly intertwined with one another, and they may be caused by any one of 20 different things. While this is true, you can't let yourself get lost in that feeling. As soon as you're faced with the problem, you want to start narrowing it down. Ask yourself the following questions:
Is there an issue with an extremely simple thing? (Is the machine on? Plugged in?)
Is the issue being caused by software or hardware?
Is this issue likely to be due to the workstation or the server?
Taking the network as a whole, which sections are affected?
Is there a problem with the cabling? Have you had the chance to check?

The first question is easy to handle, but let's consider what the simplest things you always need to check are. As the common saying goes, "All things being equal, the simplest explanation is most likely correct." Because of this, you shouldn't be surprised when you get a call from a client who is practically foaming at the mouth, all because they forgot to check whether their workstation was powered on. These issues are commonly left unchecked until the end of the troubleshooting process; after all, nobody could make a mistake like that, right? You'd be surprised. So, although anyone experienced has an expansive list of "duh" issues like this, here's a small sample of what they can be:
An issue with the login procedure and/or rights.

An issue with the link and collision lights.
A problem with the power switches, adapters, and cords around the workstation.
A simple user error.

If you set everything up properly, the users of your network will need to follow proper login procedures to access its resources; if they don't follow these, they will be denied access. While entering a username and a password without a hitch sounds easy, you will be astounded at the number of people who simply don't. This is actually one of the most common network issues out there: people simply typing in the wrong username and password. Another issue sometimes occurs when login times are carefully restricted. If a user spent too much time away from their desk, or tried to log in from a different workstation, the login might not work. The lesson to learn here is that if a user can chalk something up to the network instead of themselves, they will.

Reproducing The Issue

This is the most common question I ask whenever I hear of a network problem. It's a simple "Can you show me what isn't working?" If you can reproduce the issue consistently, you will know exactly when it happens. This, in turn, might give you all the information you need to pinpoint where it's coming from, or even solve it outright. By far the most difficult problems to resolve are those that cannot be easily reproduced and occur seemingly at random. So, let's go over my work process for every one of these login situations:
Ensure the username and password are correct.
Check whether Caps Lock is on.
Try logging in from a different workstation. Does the issue still persist? If it does, it is likely to be a network issue; if not, it is more likely to be down to the workstation.
If none of these three help, go through the documentation to see whether there are concrete restrictions in place that have been violated.

The last point is especially important to remember when your network has intruder protection enabled. Because of it, a user is likely to be locked out entirely after a number of failed login attempts. In that event, they'll need to wait until a given time period has passed before the account unlocks and gives them another chance.

Now, when you're checking link lights and collision lights, we'll have to go into a bit more detail.

The link light is a small LED that you'll see on the NIC (Network Interface Card) as well as on the hub. Generally, it shines green and is marked "link" or some abbreviation of it. If you're running 10Base-T, a link light shows that the Network Interface Card is making a Data Link layer connection with the hub. If both the link light on the NIC and the one on the hub are lit, it's very likely that the two are communicating fine. Note that some Network Interface Cards won't trigger their LEDs until the driver is loaded, so if a light isn't on, try restarting the system; it might fix the issue you're facing.

Collision lights are also small LEDs, though they're usually amber rather than green. Like the link lights, they're found on the NIC and the hub. When they light up, it means there's an Ethernet collision on the segment. In a busy Ethernet network, it's normal for these to blink from time to time; on the other hand, if one is shining constantly, you can be sure there's an issue at hand. Luckily, the most likely cause is simply a NIC or another device that isn't working properly.

Next, let's go over power switch issues. You might think I'm joking when I tell you that I've gotten calls somewhere along the lines of "I've turned on my computer, but the monitor won't light up!!" at least 100 times. The first thing you have to do is accept that this will inevitably happen from time to time. The second is to be nice, calm, and collected. Usually, systems have a power light telling you whether they're powered on, and a device can simply be left powerless because not all the necessary cables have been plugged in. Though you might think that even a child knows this, it is the root of many a "major system failure."

The final trivial issue you could face is simple user error. There's always a chance that the person using the device simply doesn't know how. In the trade, this is generally referred to as operator error, or OE. With that being said, do be aware that there's always a good chance an actual network issue is afoot, and keep in mind that the well-being of the network ultimately falls on your shoulders. Note that this is by no means an exhaustive list of the simple issues you will no doubt see, but worry not: you'll extend the list by yourself in due time.

Hardware or Software Issue?

Hardware issues are usually telegraphed, and they usually appear when some part of the device setup simply stops working. That said, failure usually comes with a warning; things rarely fail in an instant. Even before an HDD failure, you're likely to see a Disk I/O error. On the other hand, sometimes a sudden failure is exactly what happens: everything functions perfectly for months, maybe even years, and then it fails, leaving everything in disarray, with lost data everywhere and files gone left and right. So, what can we do when a hardware issue occurs?

Change the settings of the specific piece of hardware.
Update the drivers; outdated drivers are sometimes the root cause of a hardware issue.
Replace the dead piece of hardware.

In the case of total failure, that's usually a sign that you need to take your tools out and get to replacing the failed parts. If you can't do this, it's a good idea either to send the piece of hardware to a repair shop or to get a new one. Because of this, it is always crucial to back up all of the important parts of a system. Despite the fact that every single software guide ever says this, people still manage to forget it. So, if you want to avoid endless grief for everyone involved, back up everything: all the data, all the files, everything on every HDD, and do it often.

Unfortunately, software problems are a bit rougher. On occasion, you might get a GPF (General Protection Fault) message, meaning that a Windows program has encountered an issue. On other occasions, whatever you're working on will just up and decide it isn't going to work. There's an even worse case than this, where your whole machine randomly locks up. In these situations, a good first step is to go to your manufacturer's support site and get software updates and patches; if they have a forum, you might find someone having a similar issue, together with the solution. On occasion, a piece of the software you're working with can become corrupt or go missing. In these cases, the easiest way to solve the issue is either to download the file again or simply to reinstall the piece of software.

Workstation Or Server Problem?

When you're troubleshooting whether an issue stems from the server or from a particular workstation, the first thing to determine is whether one person or a whole group of people is affected. If it's just one person, chances are the workstation is giving you grief; if it isn't the workstation itself, then a segment of the network generally lies at fault. The easiest way to check in a situation like this is to log on from a different workstation and try to reproduce the problem. If you can't, the workstation is almost certainly to blame, and you should look for issues with cabling, a faulty NIC, power problems, or the operating system. On the other hand, if a whole department suddenly can't access the server, things get a lot rougher, and the server itself is the more likely culprit. The first thing to check is that everyone is properly connected to the server. If they are, are they logged in? The issue could also be individual permissions or rights; however, if no one can log onto it, the fault probably lies in the server's communication (or lack thereof) with the rest of your network. Another issue you'll face is server crashes. In these events you'll generally find either error messages on the screen or a completely blank screen, meaning the server isn't running anymore.
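
A quick first pass at the "one user or everyone?" question is simply asking whether an affected machine can reach the server at all. Here is a minimal sketch; the server name and port are assumptions, so substitute your own server and a service port you know it listens on:

    # reach_check.py: can this workstation reach the server at all?
    # The hostname and port below are assumptions; use your own server's
    # name and a service port it actually listens on.
    import socket

    SERVER = "fileserver.example.local"   # hypothetical server name
    PORT = 445                            # hypothetical service port

    def can_reach(host, port, timeout=3.0):
        try:
            # A completed TCP handshake proves name resolution, the network
            # path, and the listening service are all alive.
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        if can_reach(SERVER, PORT):
            print("Server reachable: suspect this workstation or the account.")
        else:
            print("Server unreachable: suspect the server or the network path.")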

What Parts Of The Network Are Having Problems?

This can be quite hard to figure out, especially when multiple segments are having issues. One possibility is a network-address conflict. If you're running TCP/IP, you need to ensure that IP addresses are unique across all of your networks; if two segments accidentally share an identical IP subnet address, you'll wind up with duplicate IP errors, and tracking down the source of the problem can be quite a terror.

If all the users on the network are having the problem, the issue could lie with the server itself. If that's the case, thank your lucky stars. Otherwise, you might be looking at issues with the main router or the hub, in which case network transmissions will be halted until the problem is fixed, and fixing it takes a while.

If you have WAN connections as well, things get even more complicated. The first step here is to find out whether the two sides of the WAN link can communicate at all. If they can, get your champagne ready, as you're one lucky person! Unfortunately, if they can't, you've got a dreaded WAN issue, and you'd better start checking every part of the whole network, starting from the sending station and finishing at the receiving one. Oh, and don't forget to check the WAN hardware as well! If your WAN device has built-in diagnostics, they might help you narrow down the issue; if not, the brute-force approach is as good as any.

Once you've figured out whether your suffering comes from a single workstation, a segment, or the whole network, it's time to examine the cabling. Is everything properly connected? Sometimes the DSL connection will be all jumbled up, and that's an easy fix. Check the cables between the station and the wall as well; though these cables are quite durable, they still sometimes deteriorate. If the link light is dim or flickering, there's a solid chance a bad patch cable is to blame somewhere, or the walls and ceiling have a faulty cable in them. One of the weirder things that can happen is workstations that start suffering only after dark; this usually means someone has placed a fluorescent light nearby, which produces a lot of EMI and causes issues with the cabling.

Finally, take a good look at the MDI and MDI-X settings. This source of issues often goes unchecked, yet it's crucial to uplinking switches to your network.
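
On the duplicate-IP front, one small piece of the puzzle is checking whether an address is already claimed on the local segment before you assign it. The sketch below assumes a Linux host with the iputils arping utility installed, an interface named eth0, and a made-up address range (all assumptions):

    # dup_ip_probe.py: probe whether addresses are already claimed on the
    # local segment. Assumes Linux with iputils "arping" installed; the
    # interface name and address range below are assumptions.
    import subprocess

    INTERFACE = "eth0"

    def address_in_use(ip, interface=INTERFACE):
        # In duplicate-address-detection mode (-D), arping exits 0 when no
        # other host answered for the address, and non-zero when one did.
        result = subprocess.run(
            ["arping", "-D", "-I", interface, "-c", "2", ip],
            capture_output=True,
        )
        return result.returncode != 0

    if __name__ == "__main__":
        for host in range(1, 21):             # an assumed, small range
            ip = f"192.168.1.{host}"
            if address_in_use(ip):
                print(f"{ip} is already answering ARP on this segment")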

Other Cabling Issues You’ll Want To Know About

Though cabling and other physical issues may seem like trivial matters, understanding them is vital to your performance. Most networks today are still made up of copper cables, meaning they suffer from the same issues networks of old did. Modern technology has helped significantly, but it hasn't made these issues nonexistent.

Crosstalk: Crosstalk is an issue where the signal on one wire bleeds into a wire running next to it. To minimize it, twist the wire pairs together, or route cables so they cross at 90-degree angles rather than run in parallel. The tighter you're able to twist them, the less crosstalk there'll be.

Near-End Crosstalk: A specific kind of crosstalk in which EMI bleeds from one wire into others at the end where the transmission originates. In this era of Cat 6 cabling, this is where crosstalk is most likely to occur.

Attenuation: When a signal moves through a space or medium, the medium itself degrades the signal. This is what's known as attenuation, and it's present in every network to some degree. To minimize it, remember that copper cable is only properly functional up to about 100 meters without an amplifier or repeater; if you need much more than that, you're better off using fiber-optic cable.

Collisions: A network collision occurs when two devices try to transmit over the same physical medium at the same time. Though collisions were a massive issue early in Ethernet's development, today they're much less of one. If you do encounter them, check your switches, as well as any MAC errors they report.

Shorts: Short circuits happen when current flows along an unintended path within a circuit, usually because something has gone awry with the cabling. Thankfully, there's circuit-testing equipment that will help you find them; when you do, simply replacing the cable tends to do the trick.

EMI and RFI: These occur when an external signal interferes with the proper functioning of your circuits; televisions and radios are common sources of this kind of interference. Generally, you should either ensure that the offending signal source is no longer present, or use shielded network cables.
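
Many of these physical faults leave fingerprints in the error counters your operating system keeps per interface. As a rough software-side check, here is a minimal Linux-only sketch that parses /proc/net/dev; the interface name is an assumption:

    # iface_errors.py: read per-interface error counters from /proc/net/dev.
    # Assumes Linux; the interface name "eth0" is an assumption. Steadily
    # rising errors or collisions hint at the physical faults described above.

    def read_counters(interface="eth0"):
        with open("/proc/net/dev") as f:
            for line in f.readlines()[2:]:    # skip the two header lines
                name, data = line.split(":", 1)
                if name.strip() == interface:
                    fields = [int(x) for x in data.split()]
                    # Layout: 8 receive counters, then 8 transmit counters.
                    return {
                        "rx_errors": fields[2],
                        "tx_errors": fields[10],
                        "collisions": fields[13],
                    }
        raise ValueError(f"interface {interface!r} not found")

    if __name__ == "__main__":
        print(read_counters("eth0"))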

General Steps

The Network+ troubleshooting model is quite simple and easy to learn. It essentially outlines all of the steps you need to take while troubleshooting, and this simple 9-step algorithm can be repeated ad nauseam until whatever problem you're having is entirely fixed. The steps are as follows:
1. Gather information: clearly identify all of the symptoms and issues you are having.
2. Find out which areas of your network are having issues.
3. Figure out whether there have been any recent changes, as they are often the cause of problems.
4. Identify the most likely cause of the issues, even if you can't be certain it's the right one.
5. Determine whether you need to escalate the issue: did the problem persist beyond what you can handle?
6. Make a concrete plan of action and a solution for the issue you've identified, and figure out whether your involvement could have any side effects.
7. Put your plan into action and implement the solution, then test it afterward.
8. Establish the results of the test, as well as the effectiveness of your solution.
9. Document the entire process, so whoever comes after you has an easier time. And then repeat!
While this is quite an easy process to follow, you'd be surprised how many people don't. Step 9 is especially important, as proper documentation can save you dozens of hours in the future at the cost of a few hours now. With that said, you should now be more than competent at figuring out any issues plaguing your networks!
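
To make the shape of the model concrete, here is an illustrative sketch (not an official CompTIA artifact) that frames the nine steps as a loop with the documentation step built in. Every helper function below is a hypothetical placeholder standing in for a human activity:

    # troubleshoot_loop.py: an illustrative sketch (not an official CompTIA
    # artifact) of the 9-step model as a loop. Every helper below is a
    # hypothetical placeholder standing in for a human activity.

    def gather_info(problem):           # step 1: identify the symptoms
        return f"symptoms of: {problem}"

    def identify_area(info):            # step 2: which part of the network?
        return "single workstation"

    def recent_changes(area):           # step 3: did anything change lately?
        return []

    def likely_cause(info, changes):    # step 4: best guess, not certainty
        return "bad patch cable"

    def should_escalate(cause):         # step 5: beyond your reach?
        return False

    def plan_for(cause):                # step 6: plan, solution, side effects
        return f"replace the {cause}"

    def implement_and_test(plan):       # steps 7 and 8: act, then verify
        return "resolved"

    def troubleshoot(problem):
        notes = []                      # step 9: document every pass
        while True:
            info = gather_info(problem)
            cause = likely_cause(info, recent_changes(identify_area(info)))
            if should_escalate(cause):
                notes.append(("escalated", cause))
                break
            plan = plan_for(cause)
            result = implement_and_test(plan)
            notes.append((cause, plan, result))
            if result == "resolved":    # fixed? if not, loop and repeat
                break
        return notes

    print(troubleshoot("user cannot reach the file server"))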

Conclusion

This is the end. You've learned all there is to learn about passing the CompTIA Network+ certification exam. You won't need any more external resources; all that's left is to memorize what we've covered. Here, I want to explore three things:

A recap of what we've learned.
My most important piece of advice for taking the exam and beyond.
A dive into what next steps you could, and perhaps should, take.

So, let's recap. What have we learned in this book?

A detailed look at not only what the certification is, but why you would want it and what benefits it would bring to your career.
General exam-taking tips for the CompTIA Network+ exam, which you'll find practically nowhere else.
A deeper dive into topologies than before, covering the importance of selecting the right topology and the differences between cabling types.
Ethernet, the backbone of most networks, inside and out, from its simplest elements to its most complex layers.
Internet protocols and their importance, along with a variety of other protocols and the places where you could find them useful.
The protocols outside the layers we'd considered thus far, including encapsulation, ICMP, and IP.
A variety of software and hardware tools that will make your job in IT several times easier. As the saying goes, it is not he who works harder that succeeds, but he who works smarter.
Finally, troubleshooting, possibly the most time-consuming and intricate part of any IT professional's career.

That's a ton of information! Each of those bullet points is a chapter crammed with useful info. Don't worry if you don't remember all of it exactly; you can just go back and read it again until it's all clear! Now, onto the advice. It will sound cliché, but honestly, there's only one piece of advice I've never regretted following: Do not give up.

Let's say you fail the exam. Does that mean you give up? Of course not. Maybe you forgot something, misunderstood it, or the tension simply got to you to the point where you were too distracted to work. If you give up, that's the end; there's no more room for growth, no more room to give it another try and show everyone you absolutely can do it. It's important to know that working in IT means working with failure as a constant partner. You won't find an IT professional without dozens upon dozens of failed or canceled projects. And yet they don't give up, because they're confident in themselves. This is what I want from you: press on despite failure and difficulty, and become a top-tier IT professional!

Now, you might be wondering what steps to take after the CompTIA Network+ certificate. After all, it doesn't mean you've stopped growing, does it? My suggestion is to pursue the CompTIA Cybersecurity Analyst certificate, also known as CySA+. This certification is rooted in your understanding of the behavioral analytics of networks, with the aim of preventing any threat to them; in essence, it is one of the best stepping stones to becoming a cybersecurity expert. It's a cybersecurity analyst certification with high standards and a variety of different exam styles. Think of it as a more specialized version of the CompTIA Network+ that's just a tad harder. The certificate is mostly focused on your capability to find and respond to attacks, but also on your understanding of the security and automation applications available to you. Instead of teaching you rote theory, the CySA+ teaches you practical knowledge you could use on the job one day. The course is highly technical, and it's up to date with the latest technologies and techniques in the world of cybersecurity. The certification is respected by intel analysts, app security analysts, and compliance analysts alike; it's even useful if you want to be a threat hunter or white-hat hacker.

Besides that, a different certification, such as GPEN or similar, might open up more varied career pathways. Another option is to jump straight into the workforce. Getting employed as a network engineer isn't half bad: it has good pay and it's quite a secure position. You could also become a network administrator, among a variety of other positions available to you.

So, to recap: you could pursue further certifications with CompTIA, go for other certifications, or join the workforce. All of these options are good, so whichever one appeals to you the most is probably the right one for you. Just remember: have fun with what you do, and never give up!
