
Motherboards, CPUs, and Add-On Cards
This is a guide to motherboards, CPUs, and add-on cards.
Motherboard Form Factors
In this presentation, we'll talk about what's known as the form factor for your motherboard, which effectively refers to its size, shape and default configuration. Now we'll begin with the Advanced Technology Extended or ATX Motherboard, which is rather dated these days, but likely still in use for some desktop models.
It was introduced in 1995, primarily by the Intel Corporation, as a standardized successor to previous models, many of which were proprietary. So the ATX board was to help establish more consistency. Now, that didn't mean that every board had to be identical in terms of features and configuration, but they did have the same physical dimensions, measuring 12 inches high or long by 9.6 inches wide or across. That was a larger size than many earlier models, which in turn allowed for many newer expansion cards that themselves were becoming longer, particularly video cards, which were starting to become much more advanced and as such needed more space for additional processing or even dedicated cooling.
Now, this is a general idea of the overall shape and appearance of a typical ATX board from the perspective of the left side being the back of the computer where all peripheral devices were attached. So the beige square in the upper center would be the socket where the processor was installed and the darker purple slots in the upper right were for the system memory or RAM.
And that's actually important because they were placed there intentionally, because it lined up with the fan output from your power supply, which provided inherent cooling across both the processor and the memory. And since it was a standard size, it would fit into any ATX case, regardless of the manufacturer. The expansion slots are the various length sockets in the lower left, with many ATX motherboards supporting more than one type of interface.
But with respect to the full length cards just mentioned, this refers to the available space to the right of those sockets. I realize, of course, that in this graphic we're only seeing two dimensions, but the expansion cards were inserted upright into those slots. And the components pictured to the right of those slots would themselves just be circuitry on the board itself, so they weren't very tall, if you will, allowing additional space for cards that were considerably longer than just the portion that was inserted into the interface.
Now, despite that standardization, the ATX motherboard did require a fairly large case, but that in itself was a benefit in terms of providing a lot of working space for technicians and to promote airflow and cooling within the case. But it was also a drawback because they were, quite simply, rather large overall, and many workplace environments might not have had adequate space for them. Hence the micro ATX, or mATX, which is a smaller version of the ATX.
Now, it was designed so that it could still be used in a standard ATX case if need be. But it could also be installed into smaller cases when space for the computer as a whole was limited. The mATX board is the same width at 9.6 inches, which is why it could still be used in a standard ATX case, but it wasn't as tall.
In fact, it was actually square, measuring 9.6 inches in both height and width. But to accommodate the shorter height, some expansion slots were sacrificed. To compensate, many mATX boards came with integrated components such as audio, video, or network, so that fewer expansion cards were required.
In addition, though, with that smaller design other tradeoffs were implemented, including fewer memory slots, and any components that were integrated often weren't of the same quality as dedicated expansion cards. But again, the key factor here was the smaller physical size. So while not as robust, so to speak, mATX systems simply took up less desk space, and they were also a little easier to move around if needed.
Now, there is also an ITX form factor, which stands for Information Technology eXtended, which was developed by a company named VIA, or V-I-A, which is primarily a chipset manufacturer. So ITX in and of itself is not a single motherboard specification. Rather, it designates a family of motherboards, each with a few different iterations.
But in most cases, the choice came down to size. With ITX boards, the overall layout is fairly consistent, but there were four main implementations that were all designed to be even smaller than micro ATX. So the Mini-ITX is 6.7 inches square, the Nano-ITX is 4.7 inches square, the Pico-ITX is 3.9 inches by 2.8, and the Mobile-ITX is only 2.4 inches square.
So as you might imagine, most of these boards were used for very small devices, in many cases function-specific devices, such as a component in your home theater like a receiver or a Blu-ray player. So again, these types of boards typically weren't used in standard computing environments, but you might still encounter them in these specific types of devices.
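To make the size comparisons concrete, here is a small Python sketch using the dimensions quoted above. The values are nominal figures from this discussion, not an authoritative specification table, and the fit check is a simplification that ignores mounting holes and standoff positions:

```python
# Approximate form factor footprints (height x width, inches), per the figures above.
FORM_FACTORS = {
    "ATX": (12.0, 9.6),
    "microATX": (9.6, 9.6),
    "Mini-ITX": (6.7, 6.7),
    "Nano-ITX": (4.7, 4.7),
    "Pico-ITX": (3.9, 2.8),
    "Mobile-ITX": (2.4, 2.4),
}

def fits(board: str, case_max: tuple) -> bool:
    """Return True if the board's footprint fits within a case's maximum board area."""
    bh, bw = FORM_FACTORS[board]
    ch, cw = case_max
    return bh <= ch and bw <= cw

# A standard ATX case accepts both ATX and the smaller boards:
print(fits("microATX", FORM_FACTORS["ATX"]))  # True
print(fits("ATX", FORM_FACTORS["Mini-ITX"]))  # False
```

This mirrors the point made earlier: the mATX board keeps the 9.6-inch width, so it still fits an ATX case, while the reverse does not hold.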
Motherboard Connector Types
In this video, we'll take a look at several different types of connections that you may find on your motherboard, beginning with expansion slots, which are so named because they can be used to install additional devices into your computer to expand its capabilities.
Now, these days, it's quite common for most of these devices to be integrated directly into the motherboard but there are still many devices that might be required depending on your needs, and in most cases, the integrated components would not be as high in quality as the dedicated expansion devices themselves. Common examples would include video, audio and network cards.
So as an example, the integrated video of the motherboard would likely be fine for standard office work, but it wouldn't be nearly adequate for a 3D graphic designer or a gamer. Likewise, with integrated sound, it would probably be adequate for most of us, but not at all for a recording engineer. So to begin, the first type of expansion slot we'll examine is known as Peripheral Component Interconnect, or PCI.
Now, by today's standards, the PCI architecture is rather outdated, but when it was introduced, it offered a 32-bit data bus, which doubled that of some of its predecessors, and the slots themselves were typically white in color and approximately three inches long. And in terms of power, they provided either 3.3 or 5 volts to the devices that were installed.
So, this graphic represents what PCI slots look like, and though it might not be all that easy to see, there is a divider that separates one side of the slot from the other. And there would be a corresponding notch on the connecting edge of the card, which enabled you to distinguish between the 3.3 and the 5 volt versions, because they were not interchangeable.
The divider on the left side denoted a 3.3 volt device, and it was in just about the same place on the opposite side for a 5 volt device, which simply prevented you from installing a 3.3 volt device into a 5 volt slot, and vice versa. But as mentioned, PCI is rather outdated by today's standards, so its initial successor was PCI-eXtended, or PCI-X, which enhanced the original PCI by significantly increasing the clock frequency, to 133 megahertz initially and to an effective 533 megahertz in its latest revision, resulting in much higher throughput capabilities, from roughly 1066 megabytes per second at 133 megahertz to over 4 gigabytes per second in its latest revision.
But PCI-X removed compatibility with 5 volt devices and only supported 3.3 volts, plus it increased the bus width to 64 bits. Now, I should mention here that the original PCI architecture did support 64 bits by its latest revision, and there were also some PCI-X devices that still only operated at 32 bits.
They could still be used in a 64-bit PCI-X slot, but of course they would only operate at 32 bits. For the most part, though, it could generally be expected that most PCI-X devices were running at 64 bits. In addition, the name eXtended not only referred to the enhancements, but also to the physical size. The slots themselves were usually still white, but about four centimeters, or about an inch and a half, longer.
But that said, PCI-X slots were also notched like the original PCI, and there was another notch at the point where the additional segment was located, meaning that the slots were backward compatible with original PCI devices, because that extra section simply wouldn't be filled if an original PCI device was inserted. But the device would, of course, only work at the original PCI performance specifications.
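The throughput figures for these parallel buses follow directly from the clock speed and the bus width. A quick Python sketch of that arithmetic (note the commonly quoted 133 and 1066 MB/s figures come from the exact 133.33 MHz clock; the rounded clock values used here give slightly lower numbers):

```python
def peak_throughput_mb_s(clock_mhz: float, bus_bits: int) -> float:
    """Peak theoretical throughput in MB/s: one transfer per clock cycle,
    with bus_bits of data moved per transfer."""
    return clock_mhz * bus_bits / 8  # divide by 8 to convert bits to bytes

# Original PCI: 33 MHz clock on a 32-bit bus
print(peak_throughput_mb_s(33, 32))   # 132.0 MB/s (quoted as 133)
# PCI-X at 133 MHz on its 64-bit bus
print(peak_throughput_mb_s(133, 64))  # 1064.0 MB/s (quoted as 1066)
```

So doubling the bus width to 64 bits and quadrupling the clock is exactly what takes PCI-X roughly eight times past the original PCI.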
But even PCI-X is rather outdated by today's standards, so PCI Express or PCIe was introduced to supersede both architectures, as well as the now outdated Accelerated Graphics Port, or AGP architecture, which was typically dedicated for video cards. PCIe implemented several major improvements over all other implementations, in terms of speed and bandwidth, and it remains the standard architecture to this day.
But it does have several revisions which are also referred to as generations, with the most recent being 5.0 at the time of this recording. Now, the slots themselves might come in a variety of colors, but back in the early days of the original PCI, there may have been other architectures present on the same motherboard.
So the white color was almost always used to distinguish PCI from those other architectures. But in just about every modern system, there would only be the PCIe architecture, so the color could be just about anything. But they come in a variety of sizes depending on the type of device being installed, with each size supporting a different number of lanes, which would be indicated by an x followed by the number.
So as you might imagine, a PCIe x1 or by 1 slot supported only a single lane of traffic. PCIe x4 or by 4 had 4 lanes, x8 had 8 lanes, and x16 had 16 lanes. Now, clearly, the higher the lane count, the greater the overall throughput.
But some devices simply don't require those higher levels of throughput, so they're designed more with space savings in mind, because the PCIe x1 slot is only about an inch long.
That said, with all of the improvements that have been implemented throughout all generations of PCIe, even just the x1 architecture in PCIe 5.0 supports a bandwidth of four gigabytes per second, x4 supports 16, x8 supports 32, and x16 supports 64 gigabytes per second, so exceptionally faster than the original PCI architecture.
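Those per-size figures scale linearly with the lane count, and bandwidth per lane has roughly doubled with each generation. A rough sketch in Python, using rounded per-lane values (ballpark approximations, not exact spec numbers):

```python
# Approximate bandwidth per lane, in GB/s, by PCIe generation.
# Each generation roughly doubles the previous; values are rounded.
PER_LANE_GB_S = {1: 0.25, 2: 0.5, 3: 1.0, 4: 2.0, 5: 4.0}

def pcie_bandwidth(generation: int, lanes: int) -> float:
    """Total approximate bandwidth in GB/s for a PCIe link of a given
    generation and lane count (bandwidth scales linearly with lanes)."""
    return PER_LANE_GB_S[generation] * lanes

print(pcie_bandwidth(5, 1))   # 4.0  -> the x1 figure quoted above
print(pcie_bandwidth(5, 16))  # 64.0 -> the x16 figure quoted above
```

The same function reproduces the x4 and x8 figures (16 and 32 GB/s), which is just the x1 number multiplied by the lane count.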
Now, a few other things that I'd like to point out with respect to this graphic in particular, the slot on the bottom is, in fact, the original PCI architecture, and even today, it wouldn't be all that uncommon to find this slot on any given motherboard, simply because there are still PCI devices in use. But in terms of compatibility, PCI and PCIe are compatible only in that they can coexist on the same motherboard. The devices themselves would not be interchangeable into each other's slots.
That said, among PCIe slots themselves, devices with the smaller interfaces, meaning those meant to be installed into the smaller slots, can in fact be installed into the larger slots if need be. Once installed, the device and the interface will auto-negotiate the maximum number of lanes that can be supported.
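That auto-negotiation behavior is simple to model: the link settles on the largest lane count both ends support, which is just the smaller of the two values. A minimal illustration in Python:

```python
def negotiated_lanes(device_lanes: int, slot_lanes: int) -> int:
    """A PCIe link trains down to the largest lane count that both the
    device and the slot can support."""
    return min(device_lanes, slot_lanes)

print(negotiated_lanes(4, 8))    # a x4 card in a x8 slot runs at x4
print(negotiated_lanes(16, 16))  # a x16 card in a x16 slot runs at x16
```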
So if, for example, a four-lane device is installed into an eight-lane slot, it will still work, but only over four lanes. Next is the riser card, which is typically found in systems where there is very little available space, such as a desktop system using the smaller Micro-ATX motherboard, many of which were designed to lay flat as opposed to standing upright.
This typically allowed for a monitor to be placed directly on top of the computer, while at the same time providing easier access to some of the components, such as the optical drive or front accessible USB connectors, all while taking up less desk space.
But with the computer being in a horizontal or flat position, it wasn't particularly tall, so the internal components were usually accessed by taking off the top cover. But then the inside space was only about three or four inches deep.
So with the motherboard lying flat on its back, if you will, and with expansion cards designed to sit perpendicular to those slots, in many cases the card would simply be too tall for the system. So the riser card was designed to be installed into a single slot like any other expansion card, but the riser card itself was specifically designed to be shorter, and its purpose was to provide additional expansion slots of its own.
So with the riser card installed perpendicular to the flat motherboard, and its own expansion slots oriented perpendicular to itself, it allowed the actual expansion cards to be reoriented so that they were parallel to the motherboard. Now, there may have been a few variations in the number of slots that would fit on a riser card, but typically they would support up to four.
Now, I realize that description might be a little difficult to visualize, so this graphic represents a riser card in its normal orientation, if it were to be installed onto a motherboard that was laying flat. So again, the riser card itself would go into a single socket on the motherboard, then it would be standing upright.
But then each individual expansion card would be inserted horizontally into the slots of the riser card, thus allowing for more cards to be installed in a more limited space. Now, the socket in most cases refers to the socket for the central processing unit, or CPU. And really, it's just a collection of columns and rows of holes into which the pins of the chip make contact.
Now, the four holes in this graphic are just a zoomed-in view; in most CPU sockets there would be several hundred or more. But in some older systems, the processor was held in place only by friction, so you quite literally had to make sure that all of the pins of the processor lined up perfectly with all of the holes of the socket.
Then it was just pressed into place. But as mentioned, there were many rows of pins and they were often made of gold, so they were very soft. If they weren't perfectly lined up when you applied the force, it was very easy to bend the pins and damage the chip.
So to address that, more current systems introduced the zero insertion force or ZIF socket whereby a clamp or some kind of lever was opened, which widened the holes and allowed the chip to be literally dropped into place. Then, when the lever was closed, the holes would close up, so to speak, and the chip was held securely in place.
Most newer systems still implement some form of ZIF socket to at least hold the chip in place, but most newer chips no longer have pins. They're now just flat metal contacts that rest against the corresponding contacts of the socket, so pins and holes aren't typically used anymore.
Again, that description might not be all that easy to visualize, so this is an image of a ZIF socket in the open position. So there are actually two layers of the socket. As you opened the lever, the top layer would slide slightly over the bottom layer, which opened up more space for the holes. Then, as you closed the lever, the top layer would slide back and exert pressure on the pins to hold the chip in place.
Headers refer to pin headers, which are quite literally small pins sticking straight up from the motherboard for various types of connections, but most commonly for features such as the power, sleep or other function buttons, front panel USB connections or various indicator lights.
Any given motherboard, of course, could end up in any given case, and it's the case that has the power and/or function buttons and those other indicator lights. So typically, what you'll find in a new, empty case is a bundle of wires coming from the front panel, with each set of wires terminating in a small plastic cap that has the corresponding holes for the pins on the motherboard. Now, the pin assignments and/or numbers will vary from board to board, but the appropriate cap must be oriented correctly over the appropriate pins.
For example, if one of the plastic caps has two wires leading into it, you generally have to make sure that pin one goes into socket one, or a feature such as the power button might not work. It's an easy fix if you get it wrong, though: simply reverse the orientation. You would almost never damage the connector by installing it incorrectly; it just wouldn't work.
Again, there could be several places on the motherboard where you might find pin headers, so it's important to consult the documentation for that board to be certain of which pins are to be used for which functions. And depending on the case or other components that may or may not be present, it's not uncommon at all for some pin headers to simply not be in use.
In the example in this graphic, we see pins labeled AAFP, which stands for Analog Audio Front Panel. So these would only be used if there were audio jacks, such as a headphone and/or microphone jack, on the front panel of your case. If your case didn't have them, you simply left the pins uncovered.
Now, the power connector depicted here does not refer to the power button on the front or the top of your computer case. Rather, this is the main power connector for the motherboard itself, which comes directly from the main power supply.
This particular example is a 20-pin Molex-style connector, and it would fit into a specially keyed socket on the motherboard so that it couldn't be installed in the wrong orientation, and there was a small tab on the side which clicked into place when fully inserted. Now, while the specifications for each wire are labeled, each one carried either a specific voltage or a ground, and the wiring pattern didn't matter from the perspective of a technician.
Everything was pre-wired into the connector itself, and each wire simply provided power to a different subcomponent of the motherboard. So if, for example, you had to replace either the motherboard or the power supply, you simply removed the existing connector as a whole, then reattached it once the new unit was in place.
Now there is one caveat here, though. If you look closely at this example, you'll actually find that there are 24 pins, but the extra four are in their own separate connector, and in many cases there was a small track or groove on the edge of the main connector so that this extra piece could slide on or off that main section.
That detachable four-pin piece exists because newer motherboards use a 24-pin main power socket, while older boards used only 20 pins, so the same supply could serve both. The CPU itself, by contrast, is typically powered by its own dedicated four- or eight-pin connector from the power supply, installed separately into its own socket, usually located fairly close to the CPU. So if your board's main power socket only took 20 pins, you simply slid the extra four-pin piece off and set it aside.
If, however, the main power connector had 24 sockets, you simply left both pieces attached to each other and inserted all of it into that single main socket.
Serial Advanced Technology Attachment, or Serial ATA, or just SATA, refers to the attachments for your storage devices, such as hard drives or optical drives. Most motherboards will come with at least two SATA connectors, but some may have four, six, or even more, depending on the make and model. Serial ATA was introduced in the year 2000 and quickly became the successor to the IDE interface of previous hard drives and optical drives.
The SATA connectors themselves were much smaller than their IDE counterparts, only about one centimeter across, and each connector was keyed so that it couldn't be inserted incorrectly. In most cases the cables were red, though that isn't a standard. Usually the motherboard end of the cable had a straight connector while the drive end was angled at 90 degrees, which simply made it easier to make the connection and left fewer cables sticking straight out from the back of the drive, although some cables were straight on both ends.
Unlike IDE, only one device could ever be connected on one cable, but because both the connectors and the cables are so much smaller than IDE, it's not uncommon to find several connectors integrated into your motherboard. And if you needed more, such as in a network server, you could install a serial ATA controller card into one of your expansion slots, and add many more.
There could easily be up to eight or ten connectors on a single card. External serial advanced technology attachment, or eSATA is an external interface for serial ATA technologies, which would more commonly be found on laptops or maybe on a port replicator, or perhaps directly on the case of a desktop system, but it simply allowed you to connect serial ATA devices externally as opposed to internally.
Now, this was perhaps a little more common when FireWire 400 and USB 2 were being used, but interfaces such as USB 3 have largely superseded eSATA. The cable itself was essentially the same as regular or internal SATA, but it supported a length of up to two meters, which of course, is far longer than what you would need for connecting internal drives directly to your motherboard.
Finally, the eSATA connectors themselves closely resembled those of internal SATA cables, being almost exactly the same size, again about a centimeter across, but they're keyed differently, so internal and external cables are not interchangeable. Ultimately, you're bound to encounter many different types of connections on any given motherboard or case, so particularly if you're considering purchasing a new motherboard, it's important to take note of what you currently have, or at least what you intend to install, to ensure that all appropriate connectors are available.
Motherboard CPU Compatibility
In this video, we'll take a look at motherboard compatibility with respect to the CPUs they can support, as well as the physical types of chips and sockets that are commonly found in most systems. Now, for starters, when it comes to CPU manufacturers, while there are others in the marketplace, among the most common are Intel and Advanced Micro Devices, or AMD. And in terms of making a choice between those two, it really just comes down to user preference or what you feel might be most beneficial for your needs.
So we aren't going to get into a comparison of the performance characteristics, but rather the physical characteristics and configuration of the chip itself and the sockets, so that you'll be familiar with which types of chips can be used in which types of sockets, and whether there is any level of compatibility should the processor need to be replaced or upgraded.
So with that said, the first consideration we'll look at is the physical socket type itself, which is quite simply the component into which the CPU is placed. And at the time of this recording, there are three primary types: the land grid array, or LGA, the pin grid array, or PGA, and the ball grid array, or BGA.
Now of these, LGA would likely be the most commonly used in today's systems, but PGA sockets would still be in use. But BGA isn't really a socket at all. Rather, it refers to any type of chip being permanently attached to the motherboard during production, so any type of BGA processor would not be upgradeable or replaceable without getting an entirely new motherboard.
But as such, they might be less expensive, and they do still qualify, for lack of a better word, as a socket type because they do provide a means to house the CPU. But in most cases, a BGA would more likely be found in mobile devices such as tablets or phones. So among the replaceable or upgradeable chips and socket types, the land grid array uses flat metal contacts on the underside of the chip, which then simply rest on the corresponding contacts of the socket, hence the term land of LGA.
You more or less just place the chip into the socket and the contacts of the chip land on the contacts of the socket. Then there's typically some kind of a bracket or a brace that closes over the chip, to hold it securely in place, and the processor itself is keyed with small notches to prevent incorrect orientation.
Pin grid array chips, however, have actual pins on the underside of the chip, which fit into corresponding holes of the socket. And while there are no notches of any kind on the chip itself, the pattern of the pins in the array was configured so that it could also not be installed in the wrong orientation. Most commonly, a pin in one of the corners would not be present, nor would the corresponding hole in the socket. So if the orientation was wrong, it simply wouldn't allow you to insert it.
Now, both Intel and AMD have used each type of socket, and in some earlier generations, such as the Socket 7 era, the physical specifications overlapped enough that a single motherboard could accept chips from either manufacturer. In modern systems, however, sockets are vendor-specific, so your choice of motherboard effectively determines which processor family you can use. But if it was just a matter of replacement due to damage, then it was typically a very simple process to remove the old chip and insert the new one, since each type of socket simply held the chip in place, but not permanently like the BGA socket.
Now, among the PGA sockets, in particular, in older motherboards the processor was simply aligned with the socket so that the pins were slightly in their respective holes, but then a compression force was required to fully insert the chip. Now there were tools to assist with this, which helped to ensure that the pressure was evenly distributed, but many technicians would simply apply it by hand.
And if any pin happened to be slightly misaligned, it was very easy to bend or break the pins because they were so small and usually made from gold, which is very soft anyway. So it wasn't uncommon to damage the pins when inserting or removing the processor. So the zero insertion force or ZIF socket was later introduced to alleviate this problem.
A small lever on the side of the socket was raised, which moved a platform slightly to open up the space in the holes, allowing the processor to literally just be dropped into place. Once inserted, the lever was closed, which closed up the holes and exerted pressure on the pins to hold the chip securely in place. With LGA sockets there are no pins, just flat metal contacts.
But they are still considered to be ZIF sockets because there is still no force required to insert the chip. And the socket itself will also implement slightly spring-loaded lands to exert a small amount of pressure onto the contacts of the chip and again, some kind of bracket or brace is still closed over the chip to hold it in place and ensure that the contact between the chip and the socket lands are secure.
Now, while LGA and PGA are two separate types of sockets, they simply define the category of the socket, not the specific make or model. In other words, not all LGA or PGA sockets are the same. Intel will usually define the socket by the number of contacts or pins. For example, an LGA 1155 socket has a total of 1155 contacts, so only a processor with the corresponding number of contacts can be installed into that socket.
AMD, however, typically uses a custom naming structure, such as AM3. However, their associated processors would also be labeled accordingly. But in both cases, any one model of socket may have supported several models of processor.
Now, you should always consult the documentation to be sure, but in general, if the processor could physically fit into the socket, it would work, because changes to the socket configuration simply would not allow you to install an incompatible processor. That said, AMD has been known to create sockets that are compatible with other generations of processors, in which case they would typically label the socket with a plus sign. So the AM3 model just mentioned might be AM3+.
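The compatibility rule described here, matching the CPU to its socket's support list and always confirming against the board's documentation, can be sketched as a simple lookup. AM3+ and LGA 1155 are real socket designations, but the CPU model lists below are purely illustrative, not actual qualified-vendor lists:

```python
# Hypothetical support table: socket name -> CPU models it accepts.
# In practice this information comes from the motherboard's documentation.
SOCKET_SUPPORT = {
    "LGA 1155": {"Core i5-2500", "Core i7-2600"},
    "AM3+": {"FX-8350", "Phenom II X4"},  # AM3+ also accepted many AM3-era chips
}

def cpu_fits(socket: str, cpu_model: str) -> bool:
    """Return True if the given CPU model appears on the socket's support list."""
    return cpu_model in SOCKET_SUPPORT.get(socket, set())

print(cpu_fits("AM3+", "Phenom II X4"))  # True
print(cpu_fits("LGA 1155", "FX-8350"))   # False: different vendor, different socket
```

The point of the sketch is the lookup itself: physical fit is a strong hint, but the documented support list is what actually decides compatibility.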
Another consideration is the physical number of sockets that are present. Motherboards commonly designed for network servers or maybe even higher end desktop systems offer multiprocessor models, which of course can dramatically increase the workload that can be handled by that system. Now, while modern processors do have multiple processing cores, that's not the same thing as having two or more distinct physical processors.
So if, for example, your current system has a single quad-core processor, there aren't actually four separate chips. It's just a single chip with more processing capabilities than a single-core processor. But with two physical sockets, you could install a second quad-core processor, effectively doubling your overall processing power.
Now again, this certainly isn't a situation that is overly common in the desktop environment, but with the proliferation of virtual machines these days that may all be running off a single physical host system, many network servers can certainly benefit from the increased capabilities of a multi-processor system.
But if you do find yourself working with one, then the processors you install should be identical in terms of make, model, and specifications, even if the physical socket might support other models, since mixing them will simply result in an imbalance in processing output. Lastly, another consideration with respect to which processor to choose is whether you're considering a laptop or a desktop computer.
Some models of motherboards for either might support the same physical CPU, but in most cases, laptop processors tend to be designed specifically for laptops, due to the comparatively limited space. As such, features such as the clock speed, the power consumption, available cooling and the number of processing cores are all usually not as high as they are in their desktop counterparts.
Since there is usually significantly more space in a desktop computer, the processor can have a much larger heat sink and fan, so the clock speed and the number of cores can be increased while not overheating. And since desktops never run on battery, higher power levels can be supplied. So quite simply, desktop processors are inherently capable of outperforming their laptop counterparts.
But of course, when portability is the main concern, then a laptop clearly is the right choice. And of course, this all applies to even more mobile devices, such as tablets and phones. Space limitations and power concerns will always be a factor when mobility is needed, and those factors just aren't present with a desktop.
But with that all said, tremendous advances have been made in mobile processors, and in most cases, the CPU itself is not the limiting factor when it comes to overall performance. But as mentioned, before making any purchase, upgrade or replacement, simply make sure of what you're dealing with in terms of the physical socket and the supported makes and models, to ensure that you implement an appropriate and compatible solution.
BIOS/UEFI Settings
In this video, we'll provide an overview of the Basic Input/Output System, or BIOS, of your computer and the Unified Extensible Firmware Interface, or UEFI, both of which are among the most important components of your motherboard. Beginning with the BIOS, it's a chip or a set of chips that contain the most basic system software, which instructs your system as to how to access and interact with the elemental components, such as the processor, the memory, and the storage of your system, and how the system is booted to the point where the operating system can take over.
And it's important to note that the BIOS always loads prior to the operating system, and the software itself would be proprietary to the manufacturer of the motherboard, or at least the chipset manufacturer. In other words, the BIOS of any given computer is not aware of which operating system is installed on that device, nor does it care, if you will. The BIOS is a lower level of software than the operating system.
Once it loads and sets up access to the basic hardware, its job is done, and it passes control over the rest of the boot process to whichever operating system is installed. Now, while you may encounter systems with a BIOS still, it has been succeeded by the Unified Extensible Firmware Interface, which performs the same tasks as the BIOS, but it provides more functionality for user configuration, easier updates and support for newer hardware devices.
Now, just quickly on the topic of user configuration, the BIOS does provide you with an interface so that you can configure certain options, such as which types and or size of hard drive might be installed, the clock frequency for the CPU, or maybe the amount of memory, or something as simple as the date and time.
But all you're doing is selecting options from a preset list of whichever options are supported by that BIOS. In other words, you're not installing or configuring new devices such as a network card or a video card. That's done through the operating system with vendor-supplied software and drivers. At the BIOS level there might be some configuration options for the sockets into which those devices are being installed, but the socket might host any type of device.
So again, it's the much lower level software. So BIOS-based systems did provide you with a means to access the configuration interface during the boot process, usually by pressing and holding a specific key during the boot, which would vary dependent on the manufacturer. But BIOS interfaces were usually somewhat limited.
They were almost all just menu-driven with no mouse or graphic functionality. You had to navigate by using the arrow keys on your keyboard and then select from the various options that were available. UEFI-based systems also provide this type of functionality, but we'll get into this and some other advantages of UEFI in just a few minutes.
Now, a close relative of the BIOS, if you will, is the Complementary Metal-Oxide-Semiconductor or CMOS. While you may have heard the two terms used somewhat interchangeably, that's not really the case. The CMOS actually refers to a battery-powered storage area where the BIOS settings are retained. So, for example, if you did access the configuration interface of your BIOS to make a change, those changes need to be saved, of course.
But the settings that were configured were stored in active or volatile memory, which is the same type of memory used in standard RAM, and which requires power to retain the data. Now, this really isn't because non-volatile memory didn't exist at the time, or because it was too expensive. Rather, the system clock had to keep running and the settings had to be retained even when the system was entirely powered off, to preserve information such as the date and time.
Otherwise, every single time you booted your computer, you would have to reset the clock, so the CMOS simply supplied that power while the system was off.
Another key component of the BIOS is a pre-programmed routine known as the Power-On Self-Test or POST, which is a diagnostic testing sequence that is run every time the system boots to ensure that your hardware is being detected and is working correctly. It will verify access to components such as the storage devices, a functioning display, the processor and the memory, and if any component should fail during the POST, it will report an error and hopefully you can address that problem to try to correct it.
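The POST sequence just described can be pictured as a simple check-and-halt loop. The sketch below is purely illustrative (a real POST runs in firmware, not Python, and the component names and check functions here are hypothetical), but it shows the essential behavior: test each component in order and stop with an error on the first failure.

```python
# Conceptual sketch of a POST-style check sequence (illustrative only;
# real POST runs in firmware, and these names are hypothetical).

def power_on_self_test(components):
    """Run each hardware check in order; halt and report the first failure."""
    for name, check in components:
        if not check():
            return f"POST error: {name} not detected or not functioning"
    return "POST passed: booting"

# Simulated detection results: the memory check fails, so POST halts there.
checks = [
    ("processor", lambda: True),
    ("memory",    lambda: False),   # e.g. a module seated incorrectly
    ("display",   lambda: True),
    ("storage",   lambda: True),
]
print(power_on_self_test(checks))
# → POST error: memory not detected or not functioning
```

Note that the display and storage checks never even run once the memory check fails, which mirrors how a failed POST stops the boot process outright.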
If, for example, one of your memory modules had fallen out or was inserted incorrectly, it wouldn't be detected, so that would stop the boot process and notify you that the memory could not be detected. In terms of limitations, the BIOS has been around for quite some time, since at least the early 1980s.
And while it may still be in use today, it would likely be only found in older systems. But given that age, many BIOS-based systems may be incompatible with newer hardware, such as the very large hard drives we use today. Now, as mentioned, it is possible to upgrade your BIOS, but as a specific example, most BIOS-based systems won't boot from a hard drive larger than two terabytes, and many might not recognize the amount of RAM used in modern systems.
In addition, the BIOS processes run in a 16-bit CPU mode, and those processes only have access to 1 MB of memory, which results in difficulties when trying to initialize multiple hardware devices at the same time, which in turn slows the boot process down. So then, as mentioned, the BIOS has been succeeded by the UEFI, which again performs all of the same tasks as the BIOS in terms of enumerating and configuring the lower level hardware. But like any improved version of software, it provides several enhancements: support for larger hard drives and more memory, faster processing and boot times, plus enhanced security features to ensure that the settings aren't removed or altered.
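As a quick aside on where that two-terabyte boot limit comes from: legacy BIOS systems boot from drives partitioned with the MBR scheme, which stores sector counts in 32-bit fields, and classic drives use 512-byte sectors. The arithmetic works out like this:

```python
# Why legacy BIOS/MBR systems top out around 2 TB: the MBR partition
# table stores sector counts as 32-bit values, and traditional drives
# use 512-byte sectors.
sector_size = 512              # bytes per sector (traditional)
max_sectors = 2 ** 32          # largest count a 32-bit field can hold
max_bytes = sector_size * max_sectors

print(max_bytes)               # 2199023255552
print(max_bytes / 2 ** 40)     # 2.0 (exactly 2 TiB)
```

UEFI systems use the GPT partition scheme instead, which uses 64-bit sector addresses and so removes this limit for all practical purposes.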
And coming back to some of the other features of UEFI that I touched on earlier, it also supports a much more intuitive graphical user interface with support for a mouse. So it's much more user friendly compared to BIOS interfaces, and among its best features, UEFI systems can access your network connection to download updates, which was never an option with a BIOS-based system.
Updating a BIOS required that the update be downloaded separately onto some kind of removable storage, which was then manually rewritten into the system in a process known as flashing the BIOS, which could be rather risky at times because if you flashed in the wrong BIOS update, it might have caused serious damage to the system.
A UEFI system will simply check for updates from the vendor, just like any other software these days, and it will only download the correct and appropriate updates, so it's a much safer process. Lastly, among some of the configuration options that can be set in either the BIOS or the UEFI interface are the method and or the sequence of how the system should boot.
For example, most systems will assume to boot from the storage drive that contains the operating system. But sometimes when performing maintenance, you might need to boot from something else, such as an external USB device. So you might have to reconfigure the system to boot from USB first, because if it's set to boot from the internal storage device first, it will, in most cases, find that drive and its operating system, and it won't even attempt to boot from the USB device. You can also configure secure boot options, which can help to protect the UEFI from malware or other types of low-level attacks by detecting any unauthorized alterations to bootloaders or key system files, by verifying their digital signatures.
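The boot-order behavior described above is essentially a walk down an ordered device list: the firmware boots the first entry that actually holds a bootable system. Here's a small sketch of that logic, with hypothetical device names:

```python
# Illustrative sketch of boot-order logic: the firmware walks the
# configured device list and boots the first entry that is bootable.
# Device names are hypothetical.

def select_boot_device(boot_order, bootable):
    for device in boot_order:
        if bootable.get(device):
            return device
    return None  # no bootable device found at all

bootable = {"usb": True, "internal_drive": True}

# Default order: internal drive first, so the USB stick is never tried.
print(select_boot_device(["internal_drive", "usb"], bootable))  # internal_drive

# Reordered for maintenance: USB is checked (and found) first.
print(select_boot_device(["usb", "internal_drive"], bootable))  # usb
```

This is why simply plugging in a bootable USB stick often does nothing until you change the configured order.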
Some options may be as simple as configuring fan speeds and or temperature monitoring options, such as configuring a threshold operating temperature that will generate a warning if the system begins to overheat beyond that point, or you may also be able to set up a boot password or a PIN, whereby the system won't boot at all without entering the password.
In short, the UEFI offers many improvements over the BIOS, and again, I would say that most modern systems would be UEFI based, but there may be some older systems still in use that are BIOS based. And I should also mention that during the transition from BIOS to UEFI, it was possible to configure some systems to support both, but they wouldn't be overly common these days.
What's ultimately most important is that you become familiar with how to access the UEFI or the BIOS interface of the systems you support, as well as which configuration options are available. And this would, in almost every case, come down to the manufacturer, so it's important to have access to the original documentation to ensure that you have the correct information.
BIOS Security
In this presentation, we'll examine some options for configuring security on your BIOS, which in this context does not refer to preventing any kind of malware. Rather, it addresses the fact that the BIOS configuration interface allows you to make changes that can seriously affect the performance of your system if incorrectly configured.
So the term security in this context refers to preventing unauthorized users from making changes. Any user with physical access to the computer can attempt to get into the BIOS configuration, so to ensure that those users are not able to make changes, you can typically implement a supervisory or an administrative password that is required to be able to access the interface.
So, unless the user knows that password, they'll be denied access. Now, depending on the vendor, you may also be able to configure a user-level password, and that might allow the user to view the configuration but not make any changes. Or you may be able to configure a subset of values that could be changed, but the value implemented might only come from a preconfigured range of acceptable or approved values, which might be a useful feature for someone who is in training for a technical position.
Now that said, in most, if not all cases, there is no kind of identity that's used here. In other words, you don't log in with a username and a password. You are only prompted for the password regardless of who you are. So for any support technician or administrator who should be able to have full access to the BIOS interface, they should all know the appropriate password.
Likewise, trainees or new hires should all know the user-level password but not the supervisory password. Now that said, it should be pointed out that this kind of configuration is not overly secure because in an environment where there are many support staff, given that the password is attached to the device itself as opposed to your own identity, it's not that uncommon for anyone who knows the password to pass it on to someone else who maybe shouldn't know it, and before long, everyone knows the supervisory password.
But as long as procedures and policies are observed and enforced, it can provide additional security for your systems and reduce support and or helpdesk calls due to misconfigurations. Another component of the BIOS is a feature known as LoJack, which is an anti-theft security package that is embedded into the BIOS itself, which allows the host device to be electronically tracked. Now it's not particularly common in computers, but it's quite common in automobiles so that they can be located if stolen.
But it is becoming more common in various types of mobile computing devices. And finally, Secure Boot is another BIOS security feature, which does, in fact, address malware infections or attempts to tamper with the BIOS programming. Secure Boot ensures that the device is being booted using only a trusted firmware configuration by verifying a digital signature that accompanies the firmware, which signifies that the code is trusted.
Any attempt to alter or tamper with that code will immediately invalidate the signature and the device will not accept the altered code, keeping the original code intact. The feature itself is typically enabled or disabled through the BIOS, which might pose the question as to why anyone would ever want to disable it.
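To make the signature idea concrete, here is a toy sketch of tamper detection. Real Secure Boot verifies vendor RSA signatures against certificates enrolled in the firmware; in this simplified stand-in, an HMAC computed over the firmware image plays the role of the signature, and any change to the code invalidates it:

```python
# Toy illustration of tamper detection. Real Secure Boot verifies RSA
# signatures against keys enrolled in firmware; here an HMAC over the
# firmware image stands in for that signature. All values hypothetical.
import hmac
import hashlib

vendor_key = b"enrolled-platform-key"     # stand-in for the trusted key
firmware   = b"bootloader v1.0 code"

# The "signature" shipped alongside the trusted firmware.
signature = hmac.new(vendor_key, firmware, hashlib.sha256).digest()

def verify(image, sig):
    expected = hmac.new(vendor_key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

print(verify(firmware, signature))                          # True: untouched code boots
print(verify(b"bootloader v1.0 code+malware", signature))   # False: altered code rejected
```

The key property is exactly what the narration describes: any alteration to the code, however small, immediately invalidates the signature.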
But in some cases, Secure Boot has been known to interfere with the functionality of other devices or services. So it will depend on other factors, but if no problems arise after Secure Boot has been enabled, then it should generally be left in that state. And this, in fact, leads us back to the implementation of a BIOS password.
If in fact you do want Secure Boot enabled, then you don't want other users to be able to get into the BIOS interface to disable it. So again, be sure to protect your passwords if this is the case in your environment.
Motherboard Encryption Features
In this presentation, we'll examine some security and encryption features that are available for a motherboard, starting with a component known as the Trusted Platform Module or TPM, which is a special purpose security chip that was originally implemented as a removable chip, but most newer systems would implement it as an integrated component.
But in either case, its primary purpose is to provide encryption for higher level components, such as the data on your storage device or other services within your operating system. In other words, they typically aren't used to protect the configuration of a BIOS, for example. But regardless of how it might be used, it provides a cryptographic vault where encryption keys are stored, so that without access to those keys, the services protected by them are entirely unavailable.
As an example, one common implementation for Windows-based systems is a service known as BitLocker, which can encrypt the contents of an entire storage device, such as a hard drive.
Now, while features such as the file system of Windows itself can use permissions to restrict access to files, permissions are only effective while that drive is in that device. In other words, if someone were to physically remove the hard drive from a computer, they could attach it to their own computer and then take full control of all permissions on that drive and remove them all, giving themselves full access to all data. BitLocker, however, protects every bit of data on that drive, using the encryption key stored in the cryptographic vault, which again is embedded into the motherboard.
So in short, if the drive is protected by BitLocker, then even if it is removed and installed into another computer, the data will be entirely inaccessible without access to the original encryption keys. Now, you might immediately think that this simply means that an attacker might just have to steal your entire computer to gain access, which is certainly possible and even plausible if you consider something like a laptop.
But for starters, BitLocker can also be used to encrypt the contents of removable storage, such as flash drives, and even on something like a laptop the system can be configured with a password to unlock access to the internal drive before it even boots up. Now that's only one example, and I do want to point out that the TPM itself is a component of the motherboard, not any particular operating system.
So the manner by which any given operating system will use the TPM is up to the vendor, but TPM chips are supported for use by most major operating systems, including Windows, macOS, and Unix or Linux. Other functionality might include verification of other components in the system.
For example, the TPM can check to make sure that the operating system hasn't been reinstalled or compromised in any way, and it can also enumerate other hardware components and compare what gets detected against information previously stored in the chip, to determine if any components have changed or possibly been tampered with.
If there are any discrepancies, the system may not boot, or may at least prompt for confirmation of those changes, which would require a password or other type of authentication to verify that the changes were legitimate. Now, if your system did not come equipped with a TPM, then it most likely cannot have one installed after the fact. Again, almost all newer systems have the TPM implemented as a built-in component of the motherboard.
So if you're considering a new purchase, you should verify if one is present. With a TPM, though, it can also act as a trust manager, meaning that it can be used as the root of trust for other components and services. For example, the TPM itself has a burned-in RSA key, which is the type of encryption key used in many forms of data security.
Not only can this key be used to provide integrity and authentication for the boot process of the host computer, but it can also be used to generate and protect other encryption keys, which can in turn be used to protect other information, such as passwords. You can imagine this as taking a set of actual keys and placing them inside a locked drawer or cabinet, which itself has a key.
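That locked-drawer idea is known as key wrapping, and a toy version of it can be sketched in a few lines. To be clear, a real TPM wraps keys with its burned-in RSA key inside the chip; this stand-in just XORs the data key with a pad derived from a hypothetical primary key, purely to show the hierarchy:

```python
# Toy sketch of a key hierarchy: a primary key "locks the drawer" that
# holds other keys. A real TPM wraps keys with its burned-in RSA key;
# here we XOR with a hash-derived pad only to illustrate the idea.
import hashlib

primary_key = b"tpm-burned-in-secret"               # hypothetical root key
data_key    = hashlib.sha256(b"disk-key").digest()  # key that protects the data

def wrap(key, primary):
    pad = hashlib.sha256(primary).digest()
    return bytes(a ^ b for a, b in zip(key, pad))

wrapped = wrap(data_key, primary_key)    # safe to store outside the chip
assert wrapped != data_key               # unreadable without the primary key
assert wrap(wrapped, primary_key) == data_key   # XOR again to unwrap
```

Without the primary key, the wrapped blob is useless, which is exactly why data protected by a TPM-held key becomes inaccessible when the drive is moved to another machine.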
So the primary key, if you will, protects the other keys. Now, another method of implementing hardware-based security is through a hardware security module, or an HSM. Unlike a TPM, HSMs can be added to most any device at any time because they're separate, external devices that are typically accessed over the network, although some HSMs come in the form of an expansion card that can be installed into a system; even then, the card is still removable.
HSMs offer much of the same functionality, though, by providing secure encryption capabilities through the use of RSA keys. But since they are external or at least removable, they can be used in or accessed by many different systems, and may therefore offer a wider variety of services, depending on the operating system being used to access the HSM itself.
In short, TPMs and HSMs provide similar functionality, but HSMs can be used with any system, whereas a TPM is an integrated component. But as a separate component, HSMs must also be purchased separately, so there would be an additional cost.
CPU Architecture
In this video, we'll examine what is perhaps the most fundamental characteristic of your central processing unit, the processing core. So beginning with a single core processor, this quite literally refers to a microprocessor in the form of a single physical chip, and that chip handles all processing of all threads or sets of instructions.
Now, a single core processor can accept multiple threads at one time, but since there is only the one processing core, there can only be one thread that is actually being processed at any given time. The multi-thread capability effectively means that each thread must wait for its turn, but the processing core can at least work on each thread in sequence, if you will, or perhaps in a bit of a round-robin fashion, until all threads have been fully processed.
So just for simple math, if two threads have been accepted by the processing core, it will allocate its processing time 50-50 for each thread, and it will work on both, but again, it can't work on both at exactly the same time. So it will process a little bit of thread one, then a little bit of thread two, then back to thread one, then back to thread two, and it will just keep swapping back and forth until both threads have been fully processed.
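That back-and-forth swapping can be simulated in a few lines: the single core does one unit of work from each waiting thread in turn until everything is finished. This is only a conceptual sketch of the round-robin idea, not how a real scheduler or CPU is implemented:

```python
# Simple simulation of a single core time-slicing multiple threads:
# one unit of work from each thread in turn until all are finished.

def round_robin(workloads):
    """workloads: dict mapping thread name -> units of work remaining."""
    order = []
    remaining = dict(workloads)
    while any(remaining.values()):
        for name in workloads:
            if remaining[name] > 0:
                remaining[name] -= 1      # one time slice of work
                order.append(name)
    return order

print(round_robin({"thread1": 3, "thread2": 3}))
# ['thread1', 'thread2', 'thread1', 'thread2', 'thread1', 'thread2']
```

Notice that at any single position in the output list only one thread is being worked on, which is the "only one thread in the processor at a time" constraint in action.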
Now, the single core CPU is effectively what was the standard architecture of processors for quite some time. One major category of processors is referred to as ARM, which is actually an acronym within an acronym, because ARM itself stands for Advanced RISC Machines, but RISC stands for Reduced Instruction Set Computing.
Now this is the opposite of CISC, or Complex Instruction Set Computing. Without diving too deeply into all of that, CISC designs implement larger, more complex instruction sets, as were used in very large systems such as mainframes, with individual instructions that could take many cycles to process. RISC uses much smaller instruction sets that can be processed much more quickly and much more efficiently, much like an assembly line in a factory can increase its output by having smaller, dedicated tasks performed at each station.
So ARM processors, which follow the RISC approach, power many of the devices we use today, particularly mobile devices, but increasingly laptops, desktops and even modern network servers. Now, the x86 and x64 architecture is the other major family, a specific implementation of Intel-compatible processors, whereby the 86 simply refers to the family of processors and the x was a placeholder for the specific model number.
Some of you may remember processor names such as the 286, the 386 and the 486, and in fact, even the Pentium processor was going to be named the 586, but it was rebranded for marketing purposes. But these were all 32-bit processors.
Later revisions of Intel processors could all handle 64-bit processing, so the 86 was changed to 64 for more appropriate naming. But in short, virtually all processors fall into either the ARM or the x86/x64 architecture. Now that all said, as we're about to see, there are now multi-core processors, but they all still fall within these same architectures.
And almost all processors these days would be 64-bit. Even if they aren't Intel, they would likely include something in either the name or perhaps the product specifications to indicate 64-bit capability. Ultimately, almost all modern processors would fall under one of these architectures and be capable of processing 64 bits per cycle.
Now, with the ability to at least accept multiple threads, for a single core processor this results in what's known as multitasking, meaning that more than one application can be open and running at the same time, which is of course, something that we all do these days.
For example, when we boot our computers, we probably launch our email application, maybe a browser, perhaps several documents or anything else, and we can launch tasks in all of them if necessary. For example, we might start a download in the browser, then check for new email, launch or join a video conference, print several documents. But at the level of the processor, every thread of every application can be handled by the processor.
But again, not all of them at the exact same time. Only one thread can ever be in the processor, if you will. The rest are simply queued, but each one will receive what's sometimes referred to as a time slice, again, meaning that each thread will receive the attention of the processor, but only for a limited amount of time.
So the processor will simply keep cycling through each thread until they've all been processed. So while the processor is multithreading, at the strictest level it's not really multitasking because no two threads are ever being processed at exactly the same time. So multitasking refers more so to the fact that we, as users, can simply launch multiple applications at the same time.
But again, at the level of the processor, only one thread can ever be processed at any one time. So as you might imagine, the first successor to the single core was the dual core processor, which was designed to improve performance by quite literally adding a second processing core to the chip.
But to be clear, it is still only one physical CPU. In other words, if you were to open up the case and examine the motherboard, you would not see two physical processors in the system. The processing cores are internal components of the single physical chip. So to help visualize that, in this image, we see a dividing line down the center of the processing core.
But again, it's all within one physical chip. And as an analogy for its processing capabilities, imagine that you go into a bank and there is only one wicket open. As such, only one person at a time can be serviced. But then, if a second wicket opens up, clearly the throughput can be doubled. So there are now two people handling the processing, but it's all still within the single bank.
In addition, with two processing cores, now the CPU can truly operate on multiple threads at the same time. Each individual core can still only be working on one thread at any given time, but with two cores, two threads can be processed simultaneously. And in fact, to most operating systems, dual core processors will actually appear as two CPUs in something like a performance monitoring application.
Now, beyond that, there are now other iterations of multiple core processors such as quad-core, and there might even be 8 or 16 or possibly even more. But whatever the core count is, that's how many individual threads can be processed simultaneously by that single physical CPU, which of course results in more throughput overall and better performance.
So just like the dual core processor doubles the throughput of a single core, a quad-core would double that again. But, remember that the core count just refers to the number of processing centers, if you will, not more physical CPUs.
With respect to terminology, a system that does have more than one physical CPU would be referred to as a multi-processor system as opposed to multi-core. The number of cores always refers to just the core count of a single CPU. Now, another method of improving CPU performance is known as hyper-threading, which is also referred to as SMT or Simultaneous Multi-Threading, which is a process that splits any given processing core into two virtual cores, and a virtual core appears to the operating system as a separate logical processing core.
But the key terms here are virtual and logical. In other words, we aren't talking about physically separate processing cores. Now to help visualize that, imagine that you're in a kitchen and you have two separate meals to prepare, and each one has a recipe which represents the set of instructions that must be followed.
So you might begin with recipe one, but at some point, there could be a certain amount of time where you have to wait for some aspect of recipe one to finish before you can move on to something else. For example, water might need to come to a boil, or dough might be left to rise. So, if you decide to just wait out those periods, you are essentially not progressing with that recipe.
In processing terms, you have introduced idle CPU cycles, so the same thing can happen with the threads of an application. Sometimes idle CPU cycles can occur while other operations are happening. So going back to the kitchen and recipe, instead of waiting out the steps in recipe one, you simply start in on recipe two.
Now, you likely can't get both of them done entirely at the same time, but you can swap your attention back and forth any time either one introduces some kind of wait time, thereby making more efficient use of your time and completing both recipes in less time. So in the graphic, there are what's known as processor execution resources, which represent everything necessary to process these threads.
In my kitchen analogy, these would be the ingredients. The processor also contains monitoring components known as the Architecture State or AS, and these monitor for idle cycles and inform the processor to direct its attention to the other threads. So this is what's meant by the virtual core or the logical processing core.
It's still just a single processing core, but since it's taking advantage of idle processes to work on other threads, it appears to be a separate, logical core. Now, architecturally speaking, this isn't that much different than what I mentioned earlier about how even a single core can accept multiple threads, but only one thread can ever be actually processed at one time while others are queued.
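The kitchen analogy can even be sketched in code. This uses Python's asyncio, which is software concurrency rather than the hardware mechanism hyper-threading actually uses, but it demonstrates the same principle: each wait period (water boiling, dough rising) lets work continue on the other recipe instead of sitting idle. The recipe names and timings are illustrative:

```python
# The kitchen analogy in code: asyncio stands in for the hardware
# trick, with each await (the "dough rising" idle period) allowing
# progress on the other recipe.
import asyncio

events = []

async def recipe(name, wait_seconds):
    events.append(f"{name} started")
    await asyncio.sleep(wait_seconds)   # the idle waiting period
    events.append(f"{name} finished")

async def cook():
    # Both recipes progress during each other's wait times.
    await asyncio.gather(recipe("recipe1", 0.02), recipe("recipe2", 0.01))

asyncio.run(cook())
print(events)
# ['recipe1 started', 'recipe2 started', 'recipe2 finished', 'recipe1 finished']
```

Recipe two both starts and finishes inside recipe one's wait period, so the total elapsed time is closer to the longest single wait than to the sum of both.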
But hyper-threading introduces the aspect of taking advantage of idle cycles, so that the CPU is always working on something, which further enhances the throughput. So, here is just an example of a Windows-based system with hyper-threading processors. It's a little difficult to see, but the highlighted section indicates that there is one physical socket, or actual CPU, with 4 processing cores,
so it's a single quad-core processor. But with hyper-threading enabled, the system sees each core as two logical processing centers, for a total of 8 logical processors. Now, clearly, this would not provide the same level of throughput as a single 8-core processor would, nor two quad-core processors.
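You can query what the operating system sees for yourself. In Python, os.cpu_count() reports logical processors, so a quad-core chip with hyper-threading would typically report 8. (The standard library has no portable way to get the physical core count; tools like Task Manager or third-party libraries can report both.)

```python
# os.cpu_count() reports *logical* processors: on a quad-core CPU with
# hyper-threading enabled, this typically prints 8, matching what a
# performance monitor shows.
import os

logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```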
But it does provide better throughput than a single quad-core without hyper-threading. In short, for any system that has multi-core processors, hyper-threading should be used, because when more than one thread is being accepted by the processing core, there are bound to be idle cycles.
So by taking advantage of those cycles, hyper-threading will always produce a greater throughput overall. Now that said, I don't think you'd be able to find any relatively new processor that would not support hyper-threading, but it is a feature that can usually be disabled in the interface of the system UEFI.
But unless you have a specific reason to disable it, such as a particular application that requires it to be disabled or for which hyper-threading is known to cause issues, hyper-threading should generally be enabled. Another feature that you may have encountered, more so with older processors, is overclocking, and this quite literally means that the CPU can be configured to run at speeds higher than its rated or recommended speed.
Now, this is not very common anymore for several reasons, most notably because the speeds are so fast these days compared to the days when overclocking was common, that it just isn't necessary. And perhaps more so because it was never really a recommended strategy anyway, because it could void the warranty and could potentially damage the processor or other system components, because the more you increased the speed, the more heat you would end up generating.
Now, if you had adequate cooling, you could usually get away with it, but the increase in performance would be marginal at best, and possibly not even noticeable for most applications. In older systems the clock speed could be adjusted by repositioning pins on the motherboard, while some newer systems did allow you to adjust it in the interface of the BIOS.
But I reiterate, this practice was more common back when processors and system bus speeds were maybe around 100 to 166 megahertz, whereas modern processors are already 10 or 20 times faster than that. So it just isn't necessary anymore.
Plus, all of these processors at that time were single-core, whereas these days, of course, as we've just seen, there are multi-core processors that are all capable of taking advantage of features such as hyper-threading, so again, I would say it's quite unlikely that you would find any overclocking being implemented these days, simply due to the tremendous increases in performance that modern processors can provide.
CPU Compatibility
In this presentation, we'll compare two of the most popular manufacturers of microprocessors, Advanced Micro Devices, or AMD, and Intel. But before getting into any of the specifics, I do want to point out that at the time of this recording, there really is no clear winner, so to speak, at least when you consider the entire market and the entire history of the models that each manufacturer has released.
In other words, there have been clear winners before, but usually only when you compare two very specific types of processor for very particular applications. For example, if you're looking for the best gaming laptop, you're going to have different criteria than someone who is looking for the best desktop to run office applications. So ultimately, there are almost no scenarios where you can simply say that one is always better than the other.
So for your own purposes, I simply suggest a little bit of research for the specific implementation that you need. That said, in general terms, and I stress general, AMD does tend to be popular in the gaming and or hobbyist market, and overall, they tend to be less expensive than Intel processors, but again, that will vary based on the specific model. And while there may have been some legacy motherboards with CPU sockets that could accommodate either brand, that is almost never the case with modern systems.
So you'll have to take into consideration all of the other features of each motherboard. However, many motherboard manufacturers will offer what is essentially the identical motherboard for each platform. One for Intel and one for AMD. But you cannot put an AMD processor in a motherboard specifically designed for an Intel processor, or vice versa. In terms of the pros and cons for AMD, as mentioned on the plus side they have historically been cheaper than Intel, and they also tend to outperform Intel when it comes to graphics.
And in pure benchmark tests, they are generally better at handling 64-bit applications. On the downside, they tend to run hotter than Intel, so you need to ensure that you have adequate cooling. And even though they may outperform Intel in some specific areas, the speed of the processor is generally slower than Intel. As for Intel, they're the world's largest manufacturer of microprocessors and have been for a very long time.
So quite simply, they're a very reliable and very well-respected provider of computing components. They invented the x86 series of microprocessors that was in use for decades, and later adopted the 64-bit x64 extensions, originally developed by AMD, that are used in virtually all Intel-based systems to this day. So on the plus side, for Intel, the performance is consistently high.
And Intel processors are always able to at least compete against almost any other manufacturer, even if they are slightly outperformed in certain areas. And they have very high compatibility with other components, such as motherboards. In other words, you would virtually never have to go on the hunt to find a particular motherboard that supports Intel processors. In fact, they would be the most common.
Plus, they tend to run a little cooler than AMD, which often makes them the preferred choice when you're dealing with an enclosed space, such as a laptop or a rack-mounted server where there's just not very much room for cooling. On the downside, they generally cost a bit more than other processors and they sometimes pull a little more power as well, which in the long run can increase your operating costs if, for example, you're in a very large datacenter with thousands of systems.
It might not add up too much if you do a direct side-by-side comparison of just two systems. But for thousands of systems over the run of, let's say, a year, it will add up. Lastly, I do want to stress that in most scenarios, for most people, the difference between each brand may not be noticeable or even a concern.
For example, if we're talking about standard home computing, where you're just using basic applications such as Internet and email or even office-type applications like word processors or spreadsheets, most people would never notice any difference in terms of the performance. But there are certainly instances where the performance characteristics of one brand over the other would be required.
So in those cases, your best bet is to try to find direct benchmark comparisons that report on the actual performance values being achieved in specific situations. Ultimately, that's about the only way to get an accurate idea as to which brand offers the better performance. But even then, it's not uncommon for each brand to barely outperform the other, or in other cases, even if there is a clear advantage of one over the other, that may not translate into real-world improvements because there are almost always many other factors to consider.
Benchmark tests simply focus on a single performance characteristic that is examined under very specific conditions. So once that processor is implemented in a different system, in a different network, with different types of memory, graphics and storage, those benchmark values may not always hold true.
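To see what a benchmark boils down to mechanically, here's a toy micro-benchmark sketch in Python; the workload and the repeat counts are arbitrary placeholders. Note that it measures one narrow operation in isolation, which is exactly why such numbers may not hold true once the processor is part of a full system:

```python
import time

def best_of(func, repeats: int = 5, loops: int = 1000) -> float:
    """Time `func` run `loops` times, repeated `repeats` times,
    and return the best (lowest) wall-clock result in seconds.
    Taking the minimum filters out interference from other processes."""
    results = []
    for _ in range(repeats):
        start = time.perf_counter()
        for _ in range(loops):
            func()
        results.append(time.perf_counter() - start)
    return min(results)

# A toy workload: summing a small range of integers
elapsed = best_of(lambda: sum(range(100)))
print(f"best of 5 runs: {elapsed:.6f} s")
```

Real benchmark suites are far more elaborate, but the principle is the same: a controlled, repeatable measurement of one specific task.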
And one other point to finish up, AMD and Intel have been in this tug of war, if you will, for quite some time, so it's very often the case that one of them is at the top of the market for a particular amount of time. But inevitably, one will come out with a new model or a new feature that entirely supplants the other. But then, ultimately, the same thing will happen, but in the other direction.
So in short, each one has had their moment in the sun. So if you're considering a new purchase and debating which brand to use, I simply suggest some research at that time. In other words, don't rely on something that you may have heard about one versus the other that may have been true a few years ago, because it might not be true anymore.
Similarly, you shouldn't go by the opinion of any one person who claims that one is always better than the other. This is an ongoing battle with many back-and-forth victories, and it will likely continue for quite some time. So again, just try to do some research at whatever time you are considering the new purchase or the new device or the upgrade, or whatever the case might be.
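One practical aside before moving on: on a Linux system, the CPU's vendor string is exposed in /proc/cpuinfo, where Intel chips report GenuineIntel and AMD chips report AuthenticAMD. A minimal sketch of parsing that format, using a hypothetical excerpt of the file so it runs anywhere:

```python
# Hypothetical excerpt in the /proc/cpuinfo format found on Linux
# (real files contain many more fields and one block per core)
SAMPLE_CPUINFO = """\
processor\t: 0
vendor_id\t: GenuineIntel
model name\t: Intel(R) Core(TM) i7 CPU
"""

def cpu_vendor(cpuinfo_text: str) -> str:
    """Return a friendly vendor name from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("vendor_id"):
            raw = line.split(":", 1)[1].strip()
            # The two common x86 vendor ID strings
            return {"GenuineIntel": "Intel", "AuthenticAMD": "AMD"}.get(raw, raw)
    return "unknown"

print(cpu_vendor(SAMPLE_CPUINFO))  # Intel
```

On an actual Linux machine, you would read the real file with `open("/proc/cpuinfo").read()` instead of the sample text.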
Cooling Mechanisms
In this video, we'll provide an overview of several options for dissipating heat in your system, beginning with what's known simply as the heatsink. Now, this really can be any device that is used to cool the system components, but most notably the CPU, the GPU or the graphics processing unit on a video card, and the main power supply.
Now, for components such as the processing chips, many heatsinks are implemented in the form of an aluminum alloy radiator, that is placed directly on top of the chip itself, which then absorbs and dissipates the heat. But many models will also incorporate a fan that attaches to the top of the radiator to help dissipate that heat even faster. And these days, modern computers simply could not run at the speeds they do, without a heatsink.
As technology has improved, manufacturers are driving their processors harder and harder, and the faster they go, the more heat they generate. So in that regard, they wouldn't just get warm to the touch. Without a heatsink, they could easily burn you on contact, and the system itself would overheat in a very short amount of time, which could in turn damage the entire system.
Now, there are two main types of heatsink: active and passive. An active heatsink means that it has some kind of fan to assist with dissipating the heat, so they're often referred to as a heatsink and fan, or an HSF. So it's a combination, if you will. But they may also have some kind of liquid cooling system if the HSF isn't adequate, but we'll talk about those in greater detail in just a moment.
But in short, any active heatsink uses more than just a radiator. By contrast, a passive heatsink uses only a radiator. Or to put that another way, they don't have any mechanical components. But with no moving parts, a passive heatsink is just about 100% reliable because, of course, a mechanical fan or a liquid pump could fail, but a passive heatsink is nothing more than the radiator itself.
Now they're usually quite large in comparison to the processor, and there would often be many fins, if you will, which create a much larger surface area, which in turn dissipates the heat through convection, which means that the heatsink simply relies on the fact that warm air naturally rises, carrying it away from the processor. But that said, most passive heatsinks will still require some kind of steady airflow, but that could come from fans in the main power supply or those mounted in the case itself.
But any level of airflow will ultimately ensure that the heat can be dissipated more effectively. So as mentioned, there are liquid-based cooling systems available, most of which just use water, but in terms of functionality, there is simply a reservoir as the main supply and a pump to circulate it throughout the system to cool and carry heat away from components such as the processor.
Because water holds its temperature very well once it has absorbed the heat from the components, it provides a higher level of cooling for systems that generate a lot of heat, such as gaming computers, where fans simply might not be up to the task.
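To get a rough feel for why water is so effective, we can estimate the heat a water loop carries away using the specific heat of water, about 4186 joules per kilogram per degree Celsius. The flow rate and temperature rise below are hypothetical illustration values:

```python
# Specific heat of water: roughly 4186 J per kg per degree Celsius
SPECIFIC_HEAT_WATER = 4186.0

def heat_removed_watts(flow_lpm: float, delta_t_c: float) -> float:
    """Estimate heat carried away by a water loop, in watts.

    flow_lpm:  coolant flow in litres per minute (1 L of water ~= 1 kg)
    delta_t_c: temperature rise of the water across the CPU block, in C
    """
    mass_flow_kg_per_s = flow_lpm / 60.0  # litres/min -> kg/s
    return mass_flow_kg_per_s * SPECIFIC_HEAT_WATER * delta_t_c

# Hypothetical loop: 1 litre per minute, water warming by 10 C
print(round(heat_removed_watts(1.0, 10.0)))  # roughly 698 W
```

Even a modest flow with a small temperature rise can move hundreds of watts, which is why liquid loops suit high-heat systems.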
Once the water has absorbed the heat, it's passed through a condenser coil radiator, which dissipates that heat from the water and cools it down again, making it ready for its next cycle, if you will. Then for the most demanding systems, there are refrigerated cooling systems, which are still liquid based but that liquid itself is also refrigerated to further increase its cooling capabilities.
So in terms of the components, you will likely see an evaporator, where the cold refrigerant absorbs heat and turns into a gas, and which also handles any water vapor that may condense on the very cold components; a compressor, like what you might hear on any refrigerator or air conditioner, which compresses that low-pressure gas into a high-pressure gas; a condenser, which releases the heat and turns the gas back into a liquid; some kind of flow control device to manage the rate of cooling; and insulated tubing, so the fluid will stay as cold as possible for as long as possible.
So given the increased complexity of refrigerated cooling systems, they aren't commonly found outside of very particular situations. But on the topic of refrigeration, I should also mention standard air conditioning, which is very commonly used in large data centers that house many servers and networking components in a single place.
Simply having so many devices in one location that all produce heat will likely produce an excessive amount of heat in that room. So, if the overall ambient temperature can be kept as low as possible by using air conditioning, the devices will stay inherently cooler because they would all have fans that draw in air from the surrounding environment. So the cooler that air, the more effectively those devices themselves can be cooled.
Lastly, one other component you may encounter is what's known as thermal paste, which is simply a high heat paste that produces better conduction between the components. So coming back to the standard heatsink, such as a radiator, they're designed to be mounted directly onto the surface of something like the CPU.
So the paste ensures that there is an airtight contact between the two, and as a conductor, it helps to move as much heat as possible from the processor to the heatsink. And the more heat that can be removed from the processor, the better it will perform and the longer it will last.
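The role of thermal paste can be sketched with a simplified thermal-resistance model: every interface between the chip and the air adds some resistance, measured in degrees Celsius per watt, and at steady state the chip settles at ambient temperature plus power times total resistance. The power and resistance figures below are hypothetical illustration values, and the model ignores finer details such as the die-to-lid interface:

```python
def cpu_temp_c(ambient_c: float, power_w: float,
               r_paste: float, r_sink: float) -> float:
    """Steady-state CPU temperature estimate.

    Each degree-per-watt of thermal resistance in the path
    (paste layer, then heatsink-to-air) adds (resistance x power)
    degrees above the ambient air temperature.
    """
    return ambient_c + power_w * (r_paste + r_sink)

# Hypothetical values: 25 C room, a 95 W CPU,
# 0.05 C/W through the paste, 0.30 C/W through the heatsink
print(cpu_temp_c(25.0, 95.0, 0.05, 0.30))  # 58.25
```

The sketch also shows why a poor paste job matters: doubling the paste resistance raises the chip temperature by that extra resistance times every watt the CPU dissipates.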
Expansion Cards
In this video, we'll take a look at some common examples of expansion cards and some of their basic properties. So beginning with video cards, these were among the more demanding devices when it came to expansion cards, simply due to the fact that video information requires a lot of processing and memory.
And while many desktop systems do offer an integrated video output directly on their motherboard, they would almost never be adequate for anyone needing advanced graphic capabilities such as gamers, animators and graphic designers. So video cards have used a variety of expansion slot architectures over the years, including PCIe or PCI Express, which would be the most common type these days, the accelerated graphics port or AGP, which was typically a dedicated slot on the motherboard just for video cards, and the original PCI.
I doubt that you would find any modern systems using either AGP or PCI, but there may still be some legacy systems using those slot types. Modern implementations of advanced video cards also include large amounts of dedicated processing and memory, so significant cooling is also often required. So if you're considering a purchase, be sure to take note of the power requirements, slot availability, and the amount of actual space inside the system as well.
These very high-end cards are physically quite large in many cases, measuring 8 to 10 inches long and perhaps 4 inches wide. And because of the fans and multiple outputs, they will often require two slot spaces where a normal card would need only one. Now, this graphic would be an example of a legacy video card that would only require a single space in the back of your case.
But many models at the time did have multiple output types, including VGA, S-video and DVI, due to the fact that there were several different interfaces in use on various types of monitors. Now, multiple port types might still be the case, but the more modern versions of video cards would likely have multiple HDMI and/or DisplayPort connections. And if it was only one or the other, there would still likely be up to four connectors to support multiple displays.
Next is the sound card, which, as its name indicates, is an expansion card used to produce sound. And again, almost every system these days would have some kind of integrated sound, which would be fine in most cases for just listening to music or watching videos, but for professional applications such as working in a recording studio or doing sound editing, a dedicated sound card would be required. Like the video card, most modern versions would use the PCIe interface, but you may still find some cards using the original PCI, particularly if the system is a bit older.
So again, this graphic would be an example of a fairly standard sound card, with several inputs and/or outputs for devices such as external speakers or headphones, a microphone jack and possibly a game controller.
And while many of the eighth-inch mini jacks would still likely be present on modern cards, you might also find RCA connectors, USB ports and S/PDIF, or the Sony/Philips Digital Interface, which can transfer digital audio information from one device to another without needing to convert it to analog first, which can degrade the signal. A network interface card, or NIC, allows the system to connect to your network.
And these days, almost all desktop systems at least would have a network interface built in. Many laptops would as well, but most newer laptops only have Wi-Fi and do not come with a built-in Ethernet interface, though you could, of course, get a USB-to-Ethernet adapter. And while video and sound cards would almost always be preferable to the integrated components, in many cases, the integrated network interface would likely be adequate for most users in most situations.
A dedicated card would likely have more configuration options, but for just standard connectivity to something like an Ethernet network, the integrated interface would most certainly suffice. The network interface, regardless of its type, is also where a unique hardware address known as the media access control, or MAC, address is assigned. And in fact, this is done at the manufacturing level. Every network interface is assigned a unique MAC address at the factory, and ultimately, this is how your particular device is identified on its local network.
Now, most of us use IP addresses in our own respective environments, but a MAC address is also known as a physical address because it is written directly into the firmware of the interface and it never changes. An IP address can be changed at any time, so it's referred to as a logical address. And as long as your interface and my interface are on different networks, we could in fact each have the same IP address.
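To make the MAC address format concrete, here's a small Python sketch that parses the usual colon-separated notation; the address itself is a made-up example. The first three octets form the OUI, which identifies the manufacturer, and per the IEEE 802 convention, the 0x02 bit of the first octet flags a "locally administered" address, meaning one set in software rather than burned in at the factory:

```python
def parse_mac(mac: str) -> bytes:
    """Convert a textual MAC address to its 6 raw bytes."""
    parts = mac.replace("-", ":").split(":")
    if len(parts) != 6:
        raise ValueError("expected six octets")
    return bytes(int(p, 16) for p in parts)

def describe_mac(mac: str) -> dict:
    octets = parse_mac(mac)
    return {
        # First three octets: the OUI, identifying the manufacturer
        "oui": octets[:3].hex(":"),
        # 0x02 bit of the first octet: locally administered
        # (software-assigned) vs. the factory burned-in address
        "locally_administered": bool(octets[0] & 0x02),
    }

# Made-up example address
info = describe_mac("00:1A:2B:3C:4D:5E")
print(info["oui"], info["locally_administered"])  # 00:1a:2b False
```

Looking up the OUI in the IEEE registry is how network tools report the vendor of a device they've never seen before.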
But once again, the MAC address is always directly attached, if you will, to the specific interface of that device. Most network cards are relatively simple in their physical configuration, in that most modern implementations would only have a single interface such as an RJ45 connector for a standard Ethernet cable.
Although there may be other interfaces such as USB, which can sometimes be used for monitoring or gathering statistical data from the card, but in terms of networking functionality, the RJ45 connector would be the only required interface. USB expansion cards are used to simply add more USB interfaces to the system.
Now, virtually every system made in the last 25 years has come with built-in USB connectors. But since so many of us have so many peripheral devices these days, in many cases there just might not be enough connections. Now, it should be mentioned that there are also many types of external USB hubs available, but if you prefer to have everything connected directly to the computer itself, then USB expansion cards allow you to do just that.
So, these are among the simplest types of cards in terms of physical configuration. You just install the card into one of your available slots and there's nothing more than additional USB ports along its connector panel.
The only difference between any given implementation might be the physical connector type you need, for example, perhaps you might need both the original Type-A connector and the newer Type-C, and possibly the total number of connectors overall, but that would simply be a matter of choosing the correct card for your needs.
Lastly, a video capture card can be thought of as the opposite of a video card, in that a standard video card sends signals out to a display device, whereas a capture card accepts incoming video from an external device such as a video camera or anything else that can output video data. Now, while they can certainly be in the form of an internal card, capture devices are also very commonly implemented as external units that are little more than adapters.
For example, in this graphic, there are standard RCA jacks which could be connected to the output device, which are then adapted to a Type-C USB connector, which acts as the input. Now, you would also require some kind of software to control the capture, but that might be included with the device, or perhaps it could be downloaded from the vendor's website, or perhaps you already have a third-party application that is compatible.
Regardless of the type of expansion device, the primary considerations in most cases include whether you have the physical space in your system to install it in the case of internal cards, and of course, exactly which type of functionality you require for your situation. But these days, there is likely a device for almost any situation. So if you feel that your system is lacking, there is likely a suitable expansion option available.