Linux Basics

Linux is a free and open-source alternative to Windows. Beyond the price, a main attraction of Linux is that it respects your privacy and does not track you.



Introduction To Linux

Linux is a direct descendant of the Unix operating system. Unix was created by researchers at AT&T's Bell Labs who needed custom tools for their work. It spread to universities, where students popularized it greatly.

Berkeley also played an important part because they modified it extensively. This became known as BSD, the Berkeley Software Distribution. At the same time there was also Unix System V, which came from the version maintained by Bell Labs.

Linus Torvalds

Torvalds was a Finnish student in the early 90s when he began working on the core of what would become the Linux kernel as we know it. Once the kernel worked, he combined it with tools from the GNU project for the applications.

The name Linux comes from combining Linus and Unix.

Distributions

Neither a kernel nor applications alone make a complete operating system, so putting them together was a must. It just so happened that different parties each had their separate part ready. Others have combined them since as well.

The combination of a kernel and related packages that run on that system are known as a distribution. There are hundreds of distributions today.

They include development systems, word processors, spreadsheet software, music players, and many other nice utilities. Fedora, openSUSE, and Ubuntu are great ones to get started with.

Software

There are tons of nice packages available for Linux systems today. Most are free, but you can also buy some that include extras like support built in. Graphics tools, web servers, and networking utilities are some of the most popular packages.

Supported Platforms

Almost everything today will run Linux. Intel, Apple, IBM, and Arm-based computers all run Linux, and it runs very well on each. In fact, Linux is only getting more popular.

Portability

Originally, Unix struggled here because each vendor made its own version, so the market was very fragmented. Linux, however, was written mostly in the C language.

This made it portable between different systems, which allowed it to spread much quicker than Unix ever could.

Now, Linux is used everywhere and for any type of system.

The Kernel

The kernel’s job is to distribute the computer’s resources. The resources of a computer are things like the CPU and memory. Peripherals need access to these items as well, so the kernel makes sure each one gets what it needs.

Software will request resources through system calls. The kernel then gives the software what it needs.

Multiuser Support

A Linux system is designed to have many users on one computer. Each user gets their own little area of the operating system and storage. This was often done to save money.

An organization could buy one nice machine and place inexpensive terminals anywhere in the building to access it. It is probably still a good idea if you think about it. Another advantage is that it makes the machine more cost efficient.

No one uses all of a machine’s resources constantly, so if only one person is using it, most of the resources go unused. Sharing goes hand in hand with the task system.

Since Linux is designed to handle multiple users, it also handles many concurrent tasks. This means that each user can run many processes at the same time.

Bash And Other Shells

A shell is a command interpreter. This is just an interface to the core of the operating system. It allows you to run commands and have them act instantly. It is a very powerful concept.

Bash is the most popular shell, but there are many others. Some are older, and there are newer ones too. Each user on a machine can use their own shell if that is their preference. This allows for nice customization.

Desktops

Originally, computers were mostly used with shells. This involved users issuing commands as needed on a machine. They could do calculations, manage a server or use a text editor.

Eventually, however, a GUI was created, and these were the first desktops. When I say desktop, I am referring to the graphical system that lets you do the same tasks as a shell.

The Gnome, Cinnamon, and KDE desktop environments are some of the most popular today. They each have a very different style, but they are also fun to learn. They are fun because each has its own advantages.

Today, you can even get distributions with certain spins built into them. A spin is a version that comes with certain software packages for a certain role.

For example, I could download an Astronomy spin that would include many types of Astronomy software. That is a really cool feature, by the way.

Utilities

Linux comes with many types of useful programs called utilities. These all do some unique task and do it very well. These are the basis behind the commands that you use in a terminal window.

I can check the speed of my system, disk space, free memory, CPU usage by process, and the list goes on and on.

Application Development

This is one of my favorite features. Almost every distribution has program development built in to its core. Compilers and interpreters are there. Several text editors are there too. Support for several languages comes right out of the box.

You can start with C++ or C immediately after an install. In one distribution I have, a very nice Python pdf book is even included along with its support of course. Many times an IDE is also included if you prefer that kind of workflow.

Whole books have been written about the history and usage of Linux. It is rich in history, and you can spend a lifetime learning useful things with these Linux essentials.

Did I mention it is free and has the best computing community in the world? While it came from Unix, it has far surpassed its digital parent. There is a distribution for everyone.

It does everything a Windows or Mac computer can do, and more. Better yet, 99% of the software is free and easily installed.

Things To Know Before You Install Linux

Installing Linux is not difficult, but there are some details you should be aware of. You don't want to lose data. 

Formatting The Hard Drive

A new hard drive is given a low-level format by its manufacturer, which then sends it to a retail store or reseller. Once in a consumer’s hands, it can be partitioned. A partition is a logical section of the drive.

It will have a device name to make it easy to refer to. With certain utilities, you can resize and change most partitions. When you partition a drive, you are creating a partition table and a filesystem. The table contains all the information on the partitions.

The filesystem determines how data is read and written. It records where the data is stored on the drive using mappings called inodes. There are many kinds of filesystems.

They each have their advantages. Most installation utilities will do these steps automatically if you prefer.

While formatting is not something you do every day, it is useful to know that it happens. You lose whatever data is on the drive when you format it. If it is a new system, then it is not a concern.

However, if this is an older drive, then you will want to back up your data first. Losing good data hurts.

Setting Up Directories

You have probably heard that everything in Linux is a file. This is largely true. Every file in a system has a unique identifier: its path name. The entire path name is the identifier.

So, /home/music has a separate identifier from /home/documents. Notice also that Linux systems use the forward slash, the one closest to the right-shift key.

I admit this may be beyond first-time users; setting up your own directories in a filesystem is for advanced users. Still, you might need to do it one day, so I am mentioning it here. For instance, your boss could ask you to.

Mount Points

A filesystem needs to be mounted at a specific mount point. A mount point is simply the directory where the filesystem is attached. Most installation programs will do this automatically.

However, it is good to know that this takes place in the background. You might want to customize this process in the future. There can be multiple filesystems within a system. They can be different ones and hold different files.

There is a file that holds the filesystem information: /etc/fstab. It is configurable if you want to adjust settings one day.
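Each line in /etc/fstab describes one filesystem. Here is a sketch of what an entry can look like; the UUID and options shown are made-up examples:

```
# <device>                                   <mount point>  <type>  <options>  <dump>  <pass>
UUID=0a3407de-014b-458b-b5c1-848e92a327a3    /              ext4    defaults   0       1
```

The fields are the device, where it mounts, the filesystem type, mount options, and two flags used by backup and filesystem-check tools.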

Making Partitions

Every distribution has its own installation program. These programs will usually take care of steps like partitioning. However, it is important to know that you can usually do it yourself.

Some people have specific needs for how they want their setup to look when they finish. To do this correctly, they have to set up their disks manually. You can decide what partitions you want and their sizes.

This can be very important. If you think you're going to need a large swap partition, then you can set its size manually. Common partition examples include /boot, /home, the root partition (/), and swap space.

There used to be good reasons to set up several other partitions on a Linux system. A lot of those reasons revolved around disk fragmentation. That is not too common anymore, as most new disks are SSDs, including NVMe drives.

These types of disks do not suffer from fragmentation. If you reinstall often, separate partitions can still be useful so you do not have to reinstall programs or restore data as much.

A /var partition could be useful if your data changes all the time. The /var/log directory lives in here too, and standardizing where log files are kept is a good idea for everyone.

Logs are always the keys to what is wrong with your system, so it is important to be able to find them quickly. Another popular partition is /opt, which is where optional, add-on software packages live on your system.

It is handy to know where to find certain types of files like I stated above. Packages are no exception. If you need to distribute to other systems on your network, then it is easier if they are all in one place.
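Before deciding on a layout, it helps to see the disks and partitions you already have. The lsblk utility, included with most distributions, shows them as a tree:

```shell
lsblk        # list disks and partitions as a tree, with sizes and mount points
lsblk -f     # also show each partition's filesystem type
```

The output names devices like sda1 or nvme0n1p2, which are the device names your partitions get.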

RAID

A redundant array of independent disks system is definitely something to consider. You will want to consider this if this is a server or any other machine with valuable data stored on the disks.

However, if you store your data on a remote server or a local device, then the extra cost may not be justified.

A RAID system uses two or more disks, partitions, or some combination of these two. It is a way to protect your data or add performance to your system. There are several RAID modes, and each has its advantages and disadvantages.

RAID can be hardware or software. Hardware RAID usually comes in the form of add-on cards in your system, which can contain their own processing power and often some cache memory. Software RAID is built into Linux through certain utilities and is usually the better choice.

A long time ago hardware RAID was more popular because system hardware had progressed little. Today, with SSDs, high-powered processors, and systems with 16-128 GB of RAM, software RAID is the way to go.

The main reason administrators use RAID is to help protect their important data from hardware failure. It should not be the only tactic you use, just like you should not only have one backup of your data.

Software RAID is what I use when I deem it necessary. It also costs nothing because the Linux kernel controls it. It is also more powerful and gives greater flexibility to your system. The downside is that it takes more skill to set up.

Understand what mode you desire, how to use the utilities to implement it, and know how to query your system to find out the kinds of devices it has internally.

Logical Volume Manager

LVM is a great utility, and you get the chance to enable it when you first install a system. I highly recommend doing so; it gives your system great flexibility. So what does it do? LVM lets you manage your logical volumes at a moment’s notice.

You can add more space at any time. A logical volume is like a partition, except it is adjustable as you need it. Partitions are not and are pretty much set in stone.

This works by taking physical parts of your drives, including whole disks and partitions, and grouping them into a storage pool. From that pool, you use LVM to carve out volumes however you need them to appear to your system.

You can also change these groupings and their allocated space at any time.
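As a sketch of that workflow, the core LVM commands look like this. Treat this as an illustration only: the commands need root and a spare disk, and /dev/sdb1, /dev/sdb2, and the names pool and projects are made up for the example.

```shell
# Illustration only: requires root and a spare disk
pvcreate /dev/sdb1 /dev/sdb2           # register partitions as physical volumes
vgcreate pool /dev/sdb1 /dev/sdb2      # group them into one volume group (the storage pool)
lvcreate -L 20G -n projects pool       # carve out a 20 GB logical volume
mkfs.ext4 /dev/pool/projects           # put a filesystem on it
lvextend -L +5G /dev/pool/projects     # grow it later, whenever you need more space
```

The last command is the payoff: unlike a plain partition, the volume can be resized at any time.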

Using The Shell In Linux 

When people talk about using the command line, they are really referring to the shell. You access it through a terminal window, where you run commands. The shell itself is just a program that works behind the scenes. Almost all Linux distributions include one. There are also several different shells, such as Bash, Zsh, and Fish.

 

There are also pieces of software called terminal emulators. These small programs help you talk to the shell. This is something like Konsole or Terminal, depending on your distribution.

 

Your shell prompt is where you type commands. If the last character is a ‘$’, you are a regular user. If the last character is a ‘#’, you are running as the root user, which gives you superpowers in the Linux world.

 

The shell also gives you access to your command history. Press the up arrow on your keyboard to step back through the commands you have used. This is useful because you can rerun a long command instead of retyping it. Most distributions remember around a thousand of your last commands.

 

Let’s start using some basic commands. Type the command and then hit enter:

date

You will see the current time and date pop up.

Now, try the ‘cal’ command:

cal

You should get a view of the current month. I like to use the ‘cal’ command as I am always forgetting what day it is and it is quicker to use than most other calendar systems. 

Another useful command to use is ‘df’ which tells you how much free space is on your system.

df

There is a useful parameter you can run with this command and I recommend using it:

df -h

This makes the output easier to read. I will get into parameters and options later on.

The next command to learn is the ‘free’ command. We will also add the ‘-h’ parameter after it:

free -h

This output tells you about the memory on your system.

 

Navigating Your File System

The Linux file system looks very different from a Windows file system. It is mainly because everything is named differently. The file system is organized by directories. These directories can contain either files or more directories. In Windows, they are called folders. I will use directories from here on out though. 

 

The first directory in a Linux system is called the ‘root’ directory. It contains everything else on the local system. Linux has a single file system for everything in or attached to that computer. It is important to remember this when navigating. An external storage device is mounted or attached to somewhere in the file system. 

 

To see where you are at any time, use the ‘pwd’ command. This stands for print working directory.

pwd

It gives a simple one line of output. Mine says:

/home/jason

Whenever we start a session in Linux, we start at ‘/home/username’. My username is jason, of course. We can change that later if we want, but that is not important right now. 

 

To see what files are in a directory, we use the ‘ls’ command.

ls

This command can be used to see the contents of any directory if you know the path. We already know one path because we are in it: our home directory.

ls /home

You can also see the contents of the whole computer by looking at the ‘root’. To see the ‘root’, we use ‘/’. So try this:

ls /

This shows you everything at the ‘root’ level. See the ‘/home’ directory? Your user directory is located within that ‘/home’ directory. Hopefully you can see how your system is organized now. 

 

This brings us to moving directories. We move to a different directory for various reasons. Often, we just want to work from that directory. While we can see what is there by using the path or make a file and put it there, it is easier to just be in that directory. To get to that directory we use the ‘cd’ command:

cd /home/jason

This is called using an absolute path because we started at the ‘root’ directory denoted by the first ‘/’ and then listed the directory structure until we got to our directory under ‘/home’.  We can also use relative pathnames. It is called this because it is relative to our present directory. So:

cd ..

Will move us up one directory from our present working directory. 

‘cd’ is a very helpful command. It allows for fast movement if you use a few tricks.

To instantly go to your home directory:

cd

To change the working directory to the previous directory:

cd -
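Putting those tricks together, here is a short sketch of a session; /tmp is just a handy directory that exists on every Linux system:

```shell
cd /tmp        # jump to /tmp with an absolute path
pwd            # prints: /tmp
cd /           # move to the root directory
cd -           # back to the previous directory; the shell prints its name
pwd            # prints: /tmp again
```

The shell tracks your previous directory for you, so ‘cd -’ is a quick way to bounce between two places.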

Doing More With The Shell

Using a shell gives you great satisfaction. It has a learning curve, but it is well worth it. I am assuming you have no prior knowledge. Taking it slow and using it every day is the best way to learn.

Files and Directories


Files are where your data is kept. A file can be many things. When you store input, it goes into a file. This can be a text file, a drawing, or a sound file. These are some Linux essentials you can't forget.


Directories are organizational structures. They can organize your files and other directories. At any one time, you will be in a distinct directory. You have to be logged in to have a current working directory.


The Shell


A shell is the interface to the operating system. It is text based and it accepts input as text. The input will usually invoke small programs or utilities that are installed in the operating system.

There are many different shells, but the most common one is Bash. This is part of the history and usage of Linux.


When you first log in, the operating system will put you in your home directory. You can change this behavior, just so you know. When you change directories, you can always find out where you are.


I can enter in the command:


pwd

and it will tell me what directory I am currently in.


Now, when you invoke a utility like "pwd" the shell executes this command. What it does and what you will see from then on depends entirely on the utility and what it is designed to do.

You can also modify commands. This is done by the use of "arguments".



pwd -L              "use PWD from the environment, even if it contains symlinks"

pwd -P              "avoid all symbolic links"

pwd --version       "output version information and exit"

pwd --help          "display help and exit"



You can also have multiple arguments for a command. This can greatly change its
behavior.


Certain commands require certain arguments. A "cp" command, which copies, needs
to know what it is copying and where it is copying to.


cp -r directory1 directory2

The -r option is needed here because copying a whole directory means copying it recursively.

You can also have options for any particular command. They are called "options"
because you do not have to use them to get the command to work. They work like
arguments; however, they modify the behavior of that command.


Options are usually preceded by one or two hyphens, depending on the command
and the option. If you need to use multiple short options, you can combine them
after a single hyphen with the corresponding letters.


pwd -LP

As you can see, there are no spaces in between the options. Most of the time
it does not matter in what order you put the arguments or options.


Most utilities will have a help feature.


pwd --help

It works the same for most commands. It will give you a lot of details about
the command. Arguments, options, and examples are very helpful to understand
how a command is supposed to be used.


Using Commands


The shell has to find a utility before it can run it. It does this with the
path, a variable the operating system uses to check directories for programs
to run.

That makes it very useful, because you don't always have to be in the /bin directory, for example.


If you did not have a path set somehow, you would have to be in a utility's
directory to use it.


There is a trick to run a program without using the path.


./script1.sh

This lets you run a utility without using the path variable. This can be useful
at times. Experienced users should not need to do this much. Keep it in mind as
an option though if you need it sometime.
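A minimal sketch of that trick, using a tiny throwaway script; script1.sh matches the name used above, and its contents are made up for the example:

```shell
# Create a small script (a throwaway example)
printf '#!/bin/sh\necho hello from script1\n' > script1.sh
chmod +x script1.sh       # mark it executable
./script1.sh              # prints: hello from script1
```

The ‘./’ tells the shell to run the file from the current directory instead of searching the path.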


Redirecting Output


You can redirect the output of commands. The output can be sent to another command or even a file.


pwd > test.txt


This will run the "pwd" command, which tells the present working directory. The results or output will be sent and stored into the test.txt file. This is very flexible and should be used when you need to do something like this.


This operation will overwrite an existing file with the same name. Be careful using it.
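One way to avoid clobbering a file is the append operator, ">>", which adds to the end instead of overwriting. A small sketch; notes.txt is a throwaway example file:

```shell
echo "first line"  > notes.txt    # '>' creates the file, overwriting any old one
echo "second line" >> notes.txt   # '>>' appends instead of overwriting
cat notes.txt                     # shows both lines
```

If the second command had used a single ">", only "second line" would remain.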


Redirecting Input


Just like output, you can redirect input. This is most often done with files. A file can contain a book list, for example. Commands like "cat" or "grep" can have the file's contents sent to them as input.


cat < booklist.txt

grep Magnus < booklist.txt
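To try those two commands end to end, you first need the file. A quick sketch; booklist.txt and its contents are made up for the example:

```shell
# Create a sample book list for the demonstration
printf 'Magnus Carlsen\nJudit Polgar\n' > booklist.txt
grep Magnus < booklist.txt    # prints: Magnus Carlsen
```

grep reads the redirected input and prints only the lines that match the pattern.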


Pipelines


You can connect two different commands through the use of a pipeline. This is
done with the pipe symbol, "|". When it is used, it takes the output of the first
command and sends it to the input of the second command.


This is very similar to redirecting output and sending it to a file. The difference is that we are just dealing with commands. This makes the pipeline very flexible and good to use when appropriate.


ls | lpr


The above example takes the output from the "ls" command and sends it to the "lpr" command. The "lpr" command is a print utility, so "lpr" will print the
files listed by "ls".


who | sort


This example takes the output of the "who" utility and sends it to the "sort"
utility. A list of users on your computer will be alphabetically sorted by this one command.


who | grep jmoore


This is another good combination to use. The "who" utility lists users and the
"grep" utility searches for patterns that you specify. We want to search for a user.

If you have a bunch of users and you need specific information, then use "who" to get your list and send the output to the "grep" utility.


There are many utilities that will work for this. Don't worry about knowing them all at once. Over time, it gets easier to put them together when you need specific information. You can also use three or more utilities at once with pipelines as long as nothing conflicts.
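Here is a three-utility pipeline you can try anywhere; the names fed to printf are made up for the example:

```shell
# sort groups the duplicate lines together, then uniq -c counts each one
printf 'alice\nbob\nalice\ncarol\n' | sort | uniq -c
```

The output shows "alice" counted twice and the others once, all from a single chained command.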


Background Commands


You can run commands or utilities in the foreground or background. Most of
your commands will run in the foreground. There are times, though, when you want
to run them in the background.

If a command will take a long time to run, then it is a good candidate to run in the background.


The reason you would want to do this is that it frees up your shell for you to run other commands and do other tasks. When you run a command in the background, it becomes a job.

The shell keeps track of it and assigns it a job number. You can even query this job number to check on the progress of the job.


You use the "&" sign to indicate that the current command should run in the background. One thing I do a lot is update computers on my network. I have a script I wrote for this.


  updates.sh &


 This will run my script in the background as a job. I can do other things because it is going to take a long time. This makes it very useful.
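While a job like that runs, you can check on it. A small sketch, using sleep as a stand-in for a long-running script:

```shell
sleep 30 &     # '&' runs the command in the background as a job
jobs           # list background jobs and their status
kill $!        # '$!' holds the PID of the most recent background job
```

The jobs command shows each job's number and whether it is running, and kill lets you stop one you no longer need.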


 To use an earlier example, you can do it with whatever you need to print.

 

 ls | lpr &


Again, this sends the output of "ls" to the "lpr" print utility and prints everything in the background.

Commands can have options and arguments that you place after the command name. These modify the behavior of the command itself. When you enter a command, its program needs to be in a directory listed in the path variable, or you need to be in the program's directory.

You can chain commands through the use of pipelines. Pipelines use the "|" symbol. They take the output of the first command and send it to the input of the second command.

Commands can also be run in the background. This is another useful feature that will enhance your productivity. If there is a long task to run, start it and have it run in the background.

It will go out of sight but still be running. You can then use your shell to do other tasks like creating new users or modifying permissions on files.

Filtering Text In Linux

Filtering text allows you to do many efficient tasks in Linux. Displaying and sorting text is one of the most common tasks that you will do. This section is an introduction to filters in order to create pipelines for your workflow.

Introduction

Filtering text is the process of capturing text, doing something with it, and then sending it to the output stream. Most commonly, the output from one command is taken and redirected to the input of another command. This is usually accomplished through pipes and stream operators.

Streams

A stream is a series of data. There are input and output streams. Data flows both ways. Streams can be sent to a terminal, a file, or a network device. There are three main types:

  • stdin
  • stdout
  • stderr

The first, stdin, supplies input to commands. Next, stdout, carries the normal output from commands. Then, stderr, carries any errors that were produced.
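Each stream can be redirected separately. A short sketch; the missing directory name is made up so that the first command fails on purpose:

```shell
ls /no/such/place 2> errors.txt || true   # '2>' captures stderr; '|| true' ignores the failure
cat errors.txt                            # the error message was saved here
ls /tmp > listing.txt 2>&1                # '2>&1' sends both streams into one file
```

Being able to split or merge stdout and stderr is what makes pipelines and log files work cleanly.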

Pipes

The pipe symbol, “|”, is one way to redirect output from one command to the input of another. Input can come from a command or a file. You can make a long sequence of commands using pipes. The output is usually shown in the terminal.

Output Redirection

The operator, “>”, can send output to a file. This is what you want to do if you need to save the results. Once you have data in a file, you have many more options. You can show the contents of a file, see any special characters associated with it, and split a file into two pieces.

The Cat Command

The cat command can show the contents of a file and create files. By default, it reads from stdin unless you specify a file to read from.

$ echo -e "1 teamup\n2 unbroken_bonds\n3 unified_minds\n4 cosmic_eclipse" > edition.txt
$ cat edition.txt
1 teamup
2 unbroken_bonds
3 unified_minds
4 cosmic_eclipse

In the first snippet we just sent some data to a text file that we created at the same time. Then we showed the contents of the file in the second snippet. This shows you how it works.

Let's make a second file now.

$ echo -e "1 breakpoint\n2 breakthrough\n3 ultra_prism\n4 celestial_storm" > edition2.txt

Make sure the output is what we expect.

$ cat edition2.txt
1 breakpoint
2 breakthrough
3 ultra_prism
4 celestial_storm

The cat command also concatenates files. It just so happens that we have two files, ready for joining.

$ cat edition*

The asterisk is a wildcard: it matches everything whose name starts with "edition". 

1 teamup
2 unbroken_bonds
3 unified_minds
4 cosmic_eclipse
1 breakpoint
2 breakthrough
3 ultra_prism
4 celestial_storm

This sends everything in those two files to the screen output. We can do something else cool: make a third file with the contents of the first two.

$ cat edition.txt edition2.txt > edition3.txt

This makes a third file that contains the contents of the first two.

$ cat edition3.txt
1 teamup
2 unbroken_bonds
3 unified_minds
4 cosmic_eclipse
1 breakpoint
2 breakthrough
3 ultra_prism
4 celestial_storm

That is really useful text manipulation. This also showcases the flexibility of the "cat" command.

Wordcount Command

We can use the "wc" utility, short for word count, to get more information from a file. This is handy if we know nothing about a file.

$ wc edition3.txt
  8  16 119 edition3.txt

We used this on the file we just created. It shows us the lines, words, and bytes in the file. It is very nice if you need to examine a file. The file may be thousands of lines long, and you don't want all of that in your terminal output. If it is huge like that, you have another option.

Tail Command

The tail command can show you the last lines of a file. By default, it shows you the last ten lines.

$ tail edition3.txt

My file is small but if it was large, that is the usage you would want to try first. 
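If you want a different number of lines, tail takes a -n option. A quick sketch using the edition3.txt file from above:

```shell
tail -n 2 edition3.txt    # show only the last two lines of the file
```

Swap in any number you like; head accepts the same -n option for the start of a file.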

Head Command

The head command is the same as tail, except it shows you the first lines of a file. It is used in the same way. 

Rsync

Rsync stands for remote synchronization and it transfers and syncs files. It can do this locally and remotely. It is a very good tool. Though it has a learning curve, it is not hard to pick up. Its main use is to copy files and directories between two different computers. It can look at files and only send what has been changed. It can preserve all kinds of links and metadata.    

Installing Rsync     

If you do not already have it installed on your system, you will need to install it. I am running Fedora. If you are running another distribution, use whatever package manager you have to install it.

 

On Fedora run:

sudo dnf update -y

 

This will update your packages. Then:

sudo dnf install rsync -y

 

This will install rsync to your system if it is not already there.

 

Now run:

which rsync

 

This will show you where it is installed on your system.

Then run:

rsync --version

 

That shows you the version you have.

 

Copying Files

Copying files is really easy. It is:

 

rsync -v source destination

 

The -v option means output will be given verbosely.

Source is the full path of the source file, unless you are in its directory already.

The destination should also be a full path, unless it is under your current directory too.

It looks like this:

 

rsync -v program1.cpp Documents

 

In the above example, I was already in the directory of the file I wanted to copy. You should do that when you can. I transferred it to the Documents folder.

Another example that is slightly different:

 

rsync -av /home/jason/documents /home/jason/Writing/

 

This command copies the documents directory and everything in it into my Writing folder.

You can use the ls command to look and make sure everything is transferred as expected.

 

ls Writing/

 

There are many reasons to make copies of your files. Backing up important files to another remote location is something we should all do more.

 

Whenever you do a file transfer, it is a good idea to switch to that location and make sure it is copied over. Doing this a few times will instill confidence in your command line abilities.

 

The Trailing /

The trailing slash at the end of a path dictates whether rsync will copy just the contents of a directory or the entire directory, folder included. Excluding the / from the source path copies the whole directory into the destination.

 

# This command will copy the Writing directory and its contents to the backup drive

rsync -avz /home/jason/Writing /path/BackupDrive/

 

# This command will only copy the files in the Writing directory to the backup drive.

rsync -avz /home/jason/Writing/ /path/BackupDrive

 

This is a small difference but it is very important to get right.
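You can see the difference for yourself with a throwaway experiment; the directory names here are made up, and rsync must be installed:

```shell
mkdir -p src with_dir only_files
touch src/file1
rsync -a src  with_dir/     # no trailing slash: creates with_dir/src/file1
rsync -a src/ only_files/   # trailing slash: creates only_files/file1
```

Listing the two destinations afterward makes the rule easy to remember.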

 

Copying Contents of Directories

It is often very useful to copy entire directories at once. It is easy to do this. Use:

 

rsync -av     /source/     /destination/

 

Just use the full paths of the source and destination.

So, something like this should get the job done:

 

rsync -av /home/jason/Documents/ /home/jason/Backup/

 

Copying Directories to other Directories

If we want to copy a folder to another folder then we do this:

 

rsync -av /home/jason/Documents /home/jason/Backup/

 

You should look inside the destination directory to make sure you typed the command correctly. You should see the folder nested in there.

 

Copying A File Remotely

Rsync lets you connect to different machines. This makes copying files to other machines an easy practice. You will need:

  1. File path from local machine
  2. IP address of remote machine
  3. File path on remote machine
  4. Root access to remote machine

The command will look something like this depending on what you need to do:

 

rsync -v /path/from/local/machine     [email protected]:/root/remote/path

 

Copying Directory To Another Drive

This is very handy and gives you better protection. It is also easy to implement. 

 

rsync -av /home/jason/Writing /path/BackupDrive

 

As usual, go and look to make sure everything happened the way you expect. After a while, you will not feel the need to do this.

 

Copying Directories Remotely

Rsync can handle remote directories just as easily as single files. When you run this command, you will be asked for the remote user's password, so be prepared on this front. The command looks like this:

 

rsync -av   /local/path   [email protected]:/root/remote/path/

 

Compressing Files

Rsync can compress files that it tries to transfer. This will speed up a transfer. If your transfer is very small, you will not see a difference. However, if you are doing lots of video, for example, this will be of great benefit. Do it like this:

 

rsync -avz /home/jason/Video /path/BackupDrive/

 

This command will copy the Video folder over to my backup drive.

 

Monitoring Your Progress

If we are doing a long transfer, we can monitor the progress. I like statistics so this is useful for me. The command looks like this:

 

rsync -avz --info=progress2  /home/jason/Video /path/BackupDrive/

 

This will show the overall progress and statistics of your transfer as it runs.

 

Syncing Directories

Syncing directories is easily done. Keep in mind that files can be deleted from the destination, and they will be gone, so use this command after careful consideration. We use the --delete option with the regular command plus source and destination paths. rsync will look at the source directory and then make the destination directory match it. It looks like this:

 

rsync -aP --delete /home/jason/Writing/ /path/BackupDrive/Writing/

 

Excluding Files and Directories

Rsync can easily look the other way during a command if you want it to. So, if I want to exclude a subfolder of my Writing folder, it will do that. Here is how.

 

rsync -avzP --exclude=Algebra /home/jason/Writing /path/BackupDrive

 

We can also exclude files from a transfer or sync operation. If I want to exclude .mp3 files it looks like this:

 

rsync -avzP --exclude='*.mp3' /home/jason/Music/ /path/BackupDrive

 

Options

  • -a or --archive. Archive mode, equal to several other flags at once. It tells rsync to sync recursively, transfer special and block devices, and preserve symbolic links, modification times, groups, ownership, and permissions.
  • -z or --compress. This option compresses the data that is sent to the destination machine.
  • -P. Same as --partial plus --progress. Using this option shows a progress bar during the transfer and keeps partially transferred files so they can resume.
  • --delete. When you use this option, rsync deletes extra files from the destination folder that are not in the source folder. It is how you mirror directories.
  • -q or --quiet. Use this when you don’t want to see non-error messages.
  • -e. Use this when you want to choose the remote shell to use.
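One more option worth knowing, since several of the flags above can delete or overwrite files: -n, or --dry-run, previews a transfer without changing anything. The paths below are the same hypothetical ones used earlier:

```shell
# Preview what a mirroring sync would do, without touching any files
rsync -avn --delete /home/jason/Writing/ /path/BackupDrive/Writing/
```

Read the preview, and if the list of changes looks right, run the same command again without the -n.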