Linux Explained

Linux is a popular alternative to Windows. It is free and open source, but the main attraction for many people is that it doesn't track you.


Introduction To Linux

Linux is a direct descendant of the Unix operating system. Unix was created at AT&T's Bell Labs by researchers who wanted better tools for their own work. It spread to universities, where students popularized it greatly.

Berkeley also played an important part because its researchers modified Unix extensively. Their version became known as BSD, the Berkeley Software Distribution. At the same time there was also Unix System V, which came from the version maintained by AT&T's Bell Labs.

Linus Torvalds

Linus Torvalds was a Finnish student in the early 1990s. He wrote the core of what would become the Linux kernel as we know it. Once it worked, he combined it with the GNU project's tools and applications.

The name Linux comes from combining Linus and Unix.

Distributions

Neither a kernel nor applications alone make a complete operating system, so putting them together was a must. It just so happened that different parties each had a separate part ready. Others have assembled their own combinations since.

The combination of a kernel and the related packages that run on it is known as a distribution. There are hundreds of distributions today.

They include development systems, word processors, spreadsheet software, music players, and many other nice utilities. Fedora, openSUSE, and Ubuntu are great ones to get started with.

Software

There are tons of nice packages available for Linux systems today. Most are free, but you can also buy some that include extras like commercial support. Graphics tools, web servers, and networking utilities are some of the most popular packages.

Supported Platforms

Almost everything today will run Linux. Intel, Mac, IBM, and Arm based computers all run Linux and do so very well. In fact, Linux is only getting more popular.

Portability

Originally, Unix messed this part up because each vendor made its own version, so the market was very fragmented. Linux, however, was written mostly in the C language.

This allowed it to be ported between different systems, which let it spread much more quickly than Unix ever could.

Now, Linux is used everywhere and for any type of system.

The Kernel

The kernel’s job is to distribute the computer’s resources, such as CPU time and memory. Peripherals need access to these resources as well, so the kernel makes sure each one gets what it needs.

Software will request resources through system calls. The kernel then gives the software what it needs.

Multiuser Support

A Linux system is designed to have many users on one computer. Each user gets their own little area of the operating system and storage. This was often done to save money: an organization could buy one nice machine and place dumb terminals anywhere in the building to access it. It is probably still a good idea if you think about it.

No one uses all of a machine’s resources constantly, so if only one person is on it, most of the resources go unused. Sharing the machine makes it far more cost efficient, and this goes hand in hand with the task system.

Since Linux is designed to handle multiple users, it can also handle many concurrent tasks. Each user can run many processes at the same time.

Bash And Other Shells

A shell is a command interpreter: an interface to the core of the operating system. It allows you to type commands and have them executed immediately. It is a very powerful concept.

Bash is the most popular shell, but there are many others, some older and some newer. Each user on a machine can choose their own shell, which allows for nice customization.

Desktops

Originally, computers were mostly used with shells. This involved users issuing commands as needed on a machine. They could do calculations, manage a server or use a text editor.

Eventually, however, a GUI was created, and these were the first desktops. When I say desktop, I am referring to the graphical system that lets you do the same tasks as a shell.

The Gnome, Cinnamon, and KDE desktop environments are some of the most popular today. They each have a very different style, and each is fun to learn because it has its own advantages.

Today, you can even get desktops with certain spins built into them. A spin comes with software packages chosen for a certain role.

For example, I could download an Astronomy spin that would include many types of Astronomy software. That is a really cool feature, by the way.

Utilities

Linux comes with many types of useful programs called utilities. These all do some unique task and do it very well. These are the basis behind the commands that you use in a terminal window.

I can check the speed of my system, disk space, free memory, CPU usage by process, and the list goes on and on.
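For example, a quick health check of a system might look like this on a typical desktop distribution (these utilities ship with most Linux systems; output will differ per machine):

```shell
# A quick system overview using common utilities.
uptime     # how long the system has been up, plus its load averages
df -h      # disk space on every mounted filesystem, human-readable sizes
free -h    # memory usage: total, used, and free
```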

Application Development

This is one of my favorite features. Almost every distribution has program development built into it. Compilers, interpreters, and several text editors are all there. Support for several languages comes right out of the box.

You can start with C or C++ immediately after an install. One distribution I have even includes a very nice Python PDF book along with Python support, of course. Many times an IDE is also included if you prefer that kind of workflow.

Whole books have been written about the history and usage of Linux. It is very rich in history, and you can spend a lifetime learning useful things with these Linux essentials.

Did I mention it is free and has the best computing community in the world? While it came from Unix, it has far surpassed its digital parent. There is a distribution for everyone.

It does everything a Windows or Mac computer can do and more, and 99% of the software is free and easily installed.

Things To Know Before You Install Linux

Installing Linux is not difficult, but there are some details you should be aware of so you don't lose data.

Formatting The Hard Drive

A new hard drive is specially prepared by its manufacturer before being sent to a retail store or reseller. Once in a consumer’s hands, it can be partitioned. A partition is a logical section of the drive.

Each partition gets a device name to make it easy to refer to. With certain utilities, you can resize and change most partitions. When you partition a drive, you create a partition table, and formatting a partition then creates a filesystem on it. The table contains all the information about the partitions.

The filesystem determines how data is read: it records where data is stored on the drive, using mappings called inodes. There are many kinds of filesystems.

They each have their advantages. Most installation utilities will do these steps automatically if you prefer.
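You can peek at inode numbers on any running system. A small sketch using a throwaway file (the `-c` format option here is specific to GNU stat):

```shell
# Create a temporary file so we don't touch real data, then inspect it.
tmp=$(mktemp)
ls -i "$tmp"                             # inode number, then the file name
stat -c 'inode: %i  links: %h' "$tmp"    # GNU stat: inode number and link count
rm -f "$tmp"                             # clean up
```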

While formatting is not something you do every day, it is useful to know that it happens. You lose whatever data is on the drive when you format it. If it is a new system, then it is not a concern.

However, if this is an older drive, then you will want to back up your data first. Losing good data hurts.

Setting Up Directories

You have probably heard that everything in Linux is a file, and that is largely true. Every file in a system has a unique identifier: its full path name.

So, /home/music has a different identifier than /home/documents. Notice also that Linux systems use the forward slash (/), not the backslash that Windows uses.

I admit this may be beyond first-time users; laying out your own directories in a filesystem is for advanced users. But you might be asked to do it one day, by your boss for instance, so I am mentioning it here.

Mount Points

A filesystem needs to be mounted, and it is mounted at a specific mount point: the directory where the filesystem is attached. Most installation programs will do this automatically.

However, it is good to know that this takes place in the background. You might want to customize this process in the future. There can be multiple filesystems within a system. They can be different ones and hold different files.

Filesystem mount information is held in a file called /etc/fstab. It is configurable if you ever want to adjust settings.
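Each line in /etc/fstab names a device, its mount point, the filesystem type, mount options, and two flags for the dump and fsck utilities. A hypothetical entry (the UUID below is made up for illustration) might look like:

```
# <device>                                <mount point>  <type>  <options>  <dump>  <pass>
UUID=1b2c3d4e-0000-0000-0000-000000000000 /home          ext4    defaults   0       2
```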

Making Partitions

Every distribution has its own installation program. These programs will usually take care of steps like partitioning. However, it is important to know that you can usually do it yourself.

Some people have specific needs for how they want their setup to look. To get that right, they have to set up their disks manually, deciding which partitions they want and what sizes to make them.

This can be very important. If you think you're going to need a large swap partition, then you can set it manually. Common examples are the /boot, / (root), swap, and /home partitions.

There used to be good reasons to set up several other partitions on a Linux system. A lot of those reasons revolved around disk fragmentation. That is not a common concern anymore, as most new disks are SSDs or NVMe drives.

Fragmentation is not a performance problem on these disks. If you reinstall often, though, separate partitions can still be useful so you do not have to restore programs or data as much.

A /var partition could be useful if your data changes all the time. The /var/log directory usually lives there too, and standardizing where log files are kept is a good idea for everyone.

Log files are often the key to finding out what is wrong with your system, so it is important to be able to locate them quickly. Another popular partition is /opt, which is where optional add-on software packages live on your system.

It is handy to know where to find certain types of files, as I stated above, and packages are no exception. If you need to distribute them to other systems on your network, it is easier if they are all in one place.

RAID

A redundant array of independent disks (RAID) is definitely worth considering if you are building a server or any other machine with valuable data stored on its disks.

However, if your data is already backed up to a remote server or a separate local device, then the extra cost may not be justified.

A RAID system uses two or more disks, partitions, or some combination of these two. It is a way to protect your data or add performance to your system. There are several RAID modes, and each has its advantages and disadvantages.

RAID can be hardware or software. Hardware RAID usually takes the form of add-on cards in your system, which can contain their own processing power and often some cache memory. Software RAID is built into Linux through certain utilities and is usually the better choice.

A long time ago, hardware RAID was more popular because system hardware had progressed little. In current times, with SSDs, high-powered processors, and systems with 16-128 GB of RAM, software RAID is the way to go.

The main reason administrators use RAID is to help protect their important data from hardware failure. It should not be the only tactic you use, just like you should not only have one backup of your data.

Software RAID is what I use when I deem it necessary. It also costs nothing because the Linux kernel controls it. It is also more powerful and gives greater flexibility to your system. The downside is that it takes more skill to set up.

Understand what mode you desire, how to use the utilities to implement it, and know how to query your system to find out the kinds of devices it has internally.

Logical Volume Manager

LVM is a great utility, and you get the chance to enable it whenever you first install a system. I highly recommend doing so, because it gives your system great flexibility. So what does it do? LVM allows you to adjust your logical volumes at a moment’s notice.

You can add more space at any time. A logical volume is like a partition, except it is adjustable as you need it; traditional partitions are not, and are pretty much set in stone.

It works by taking physical parts of your drives, whole disks or partitions, and grouping them into a storage pool. With those parts pooled, you use LVM to group them however you need them to appear to your system.

You can also change these groupings and their allocated space at any time.
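The usual workflow, sketched with hypothetical device and volume names. These commands need root and spare devices, so treat this as pseudocode to read rather than something to paste in:

```
pvcreate /dev/sdb1 /dev/sdc1                # mark partitions as physical volumes
vgcreate datapool /dev/sdb1 /dev/sdc1       # group them into a volume group (the pool)
lvcreate -L 50G -n projects datapool        # carve out a 50 GB logical volume
mkfs.ext4 /dev/datapool/projects            # put a filesystem on it
lvextend -r -L +20G /dev/datapool/projects  # later: grow it by 20 GB, resizing the filesystem too
```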

Exploring Your Linux System

Ls Command

The ls command is one of the most used commands. It lists directory contents and other information about files, and it will quickly become second nature. It is used like this:

ls

We can specify a directory:

ls Music

We can see more detail like this:

ls -l Music

 

As with almost all commands, there are options and arguments you can use with them. These options and arguments modify how the commands work. An option is usually a dash followed by a single character. These are called short options. There are also long options that do the same thing in most cases. A long option is two dashes followed by a word. You can even use more than one option at a time. So we can do things like:

ls -a

ls -h

ls -ah

Options in Linux are case sensitive. It can be easy to forget this so keep it in mind. 
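A short session showing a short option, its long equivalent, and two short options combined. It uses a scratch directory with made-up file names so the listings are predictable:

```shell
# Work in a throwaway directory so the output is predictable.
dir=$(mktemp -d)
cd "$dir"
touch notes.txt music.mp3
ls           # just the two files
ls -a        # the -a short option also shows . and .. and hidden files
ls --all     # the same thing written as a long option
ls -lh       # two short options combined: long listing with human-readable sizes
cd / && rm -rf "$dir"   # clean up
```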

 

CD Command

The cd command lets you move around your system. It changes directories.

Once we look at our current location with the ls command, we can change to a directory that is listed there if we want to.

ls

I am still in my home directory but I see a Music folder. What’s in there I wonder? Let us see.

cd Music

ls

Well, there isn’t anything in there yet. I need to fix that soon; I like music like everyone else. Hopefully you can see how this is helpful. cd can also do other things, like move up one directory level.

cd ..

ls

Now, you see that you are back where you started. That is the basic usage for the cd command. It is enough for now.

 

File Command

The file command gives you the file type of the file you are looking at. 

file .bashrc

This shows you it is a text file. Since there are many types of files, it is not always obvious what kind you are looking at.

 

Less Command

The less command lets us view a text file one screenful at a time, which is helpful when the file is many pages long. We look at text files so we can modify them; an example would be a file that controls settings. Program code is also plain text, and that is how we make programs. We can use it like this:

less .bashrc

We can now scroll up and down. I do not recommend changing anything yet, but you need to know how to examine a file. Settings in Linux live in files, so this is important to know.

Press q to exit the less program.

 

Symbolic Links

While you are exploring your system, you might see something that looks pretty weird after you use ls to view a directory’s contents. This is called a symbolic link, also known as a soft link. You can identify symbolic links because their long listing starts with an l. A soft link is like a pointer to a real file. It is useful because a program can change where the pointer leads instead of maintaining countless copies of the file itself. This lets your system be much more efficient.
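You can make one yourself with `ln -s` and see that leading `l` in the listing. A sketch in a throwaway directory with made-up file names:

```shell
dir=$(mktemp -d)
cd "$dir"
echo "real data" > target.txt
ln -s target.txt pointer.txt   # create a soft link pointing at target.txt
ls -l pointer.txt              # the line starts with l: lrwxrwxrwx ... pointer.txt -> target.txt
cat pointer.txt                # reading the link reads the real file; prints: real data
cd / && rm -rf "$dir"          # clean up
```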

Using The Shell In Linux 

When people talk about using the command line, they are really referring to the shell, which you access through a terminal window where you run commands. The shell itself is just a program that works behind the scenes. Almost all Linux distributions include one, and there are several different shells to choose from. Some of these are Bash, Zsh, and Fish.

 

There are also pieces of software called terminal emulators. These small programs help you talk to the shell. Depending on your distribution, this is something like Konsole or GNOME Terminal.

 

Your shell prompt is where you type in commands. If the last character is a ‘$’ then you are a regular user. If the last character is a ‘#’ you are running as the root user, which gives you superpowers in the Linux world.

 

The shell also gives you access to your command history. Press the up arrow on your keyboard to see your previous command, and keep pressing it to step further back through the commands you have used. This is useful because you can replay a long command instead of retyping it. Most distributions remember around a thousand of your last commands.

 

Let’s start using some basic commands. Type the command and then hit enter:

date

You will see the current time and date pop up.

Now, try the ‘cal’ command:

cal

You should get a view of the current month. I like to use the ‘cal’ command as I am always forgetting what day it is and it is quicker to use than most other calendar systems. 

Another useful command to use is ‘df’ which tells you how much free space is on your system.

df

There is a useful parameter you can run with this command and I recommend using it:

df -h

This makes the output easier to read. I will get into parameters and options later on.

The next command to learn is the ‘free’ command. We will also add the ‘-h’ parameter after it:

free -h

This output tells you about the memory on your system.

 

Navigating Your File System

The Linux file system looks very different from a Windows file system. It is mainly because everything is named differently. The file system is organized by directories. These directories can contain either files or more directories. In Windows, they are called folders. I will use directories from here on out though. 

 

The first directory in a Linux system is called the ‘root’ directory. It contains everything else on the local system. Linux has a single file system for everything in or attached to that computer. It is important to remember this when navigating. An external storage device is mounted or attached to somewhere in the file system. 

 

To see where you are at any time, use the ‘pwd’ command. This stands for print working directory.

pwd

It gives a simple one line of output. Mine says:

/home/jason

Whenever we start a session in Linux, we start at ‘/home/username’. My username is jason, of course. We can change that later if we want to, but that is not important right now.

 

To see what files are in a directory, we use the ‘ls’ command.

ls

This command can be used to see the contents of any directory if you know the path. We already know one path, because we live under it: the ‘/home’ directory.

ls /home

You can also see the contents of the whole computer by looking at the ‘root’. To see the ‘root’, we use ‘/’. So try this:

ls /

This shows you everything at the ‘root’ level. See the ‘/home’ directory? Your user directory is located within that ‘/home’ directory. Hopefully you can see how your system is organized now. 

 

This brings us to moving between directories. We move to a different directory for various reasons; often, we just want to work from it. While we could reach its contents by typing full paths each time, it is easier to just be in that directory. To get there we use the ‘cd’ command:

cd /home/jason

This is called using an absolute path because we started at the ‘root’ directory denoted by the first ‘/’ and then listed the directory structure until we got to our directory under ‘/home’.  We can also use relative pathnames. It is called this because it is relative to our present directory. So:

cd ..

Will move us up one directory from our present working directory. 

‘cd’ is a very helpful command. It allows for fast movement if you use a few tricks.

To instantly go to your home directory:

cd

To change the working directory to the previous directory:

cd -

Doing More With The Shell

Using a shell gives you great satisfaction. It does have a learning curve, but it is well worth it. I am assuming you have no prior knowledge. Taking it slow and using it every day is the best way to learn.

Files and Directories


Files are where your data is kept. A file can be many things: when you store input, it goes into a file, whether that is a text document, a drawing, or a sound recording. These are some Linux essentials you can't forget.


Directories are organizational structures. They can organize your files and other directories. At any one time, you will be in a distinct directory. You have to be logged in to have a current working directory.


The Shell


A shell is the interface to the operating system. It is text based and it accepts input as text. The input will usually invoke small programs or utilities that are installed in the operating system.

There are many different shells, but the most common one is Bash. This is part of the history and usage of Linux.


When you first log in, the operating system will put you in your home directory. You can change this behavior, just so you know. When you change directories, you can always find out where you are.


I can enter in the command:


pwd

and it will tell me what directory I am currently in.


Now, when you invoke a utility like "pwd" the shell executes this command. What it does and what you will see from then on depends entirely on the utility and what it is designed to do.

You can also modify commands. This is done through the use of options.



pwd -L             "use PWD from the environment"


pwd -P             "avoid all symbolic links"


pwd --version      "output version information and exit"


pwd --help         "display help and exit"



You can also give a command multiple options, which can greatly change its
behavior.


Certain commands require certain arguments. A "cp" command, which copies, needs
to know what it is copying and where it is copying to.


cp file1 file2

You can also have options for any particular command. They are called "options"
because you do not have to use them to get the command to work. Unlike
arguments, which a command may require, options extend its behavior.


Options are usually preceded by one hyphen (short form) or two hyphens (long
form), depending on the command. If you need to use multiple short options,
combine their letters after a single hyphen.


pwd -LP

As you can see, there are no spaces between the combined options. Most of the
time it does not matter in what order you put the arguments or options.


Most utilities will have a help feature.


pwd --help

It works the same for most commands. It will give you a lot of details about
the command. Arguments, options, and examples are very helpful to understand
how a command is supposed to be used.


Using Commands


Normally, the shell finds utilities for you through the path. The path is a
variable the operating system uses to check directories for programs to run.

That makes it very useful, because you don't always have to be in the /bin directory, for example.


In practice this rarely comes up, because the path variable is always set.
However, if you somehow did not have a path set, you would have to be in a
utility's directory to use it.


There is a trick to run a program that is not on the path.


./script1.sh

This lets you run a utility from the current directory without relying on the
path variable. It is especially useful for running your own scripts. Keep it in
mind as an option if you need it sometime.
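The full round trip: write a tiny script, mark it executable, and run it from the current directory. The script name and its message are made up for the example:

```shell
dir=$(mktemp -d)
cd "$dir"
# Write a one-line shell script.
printf '#!/bin/sh\necho "hello from script1"\n' > script1.sh
chmod +x script1.sh   # make it executable
./script1.sh          # ./ says "look right here", bypassing the path search
cd / && rm -rf "$dir" # clean up
```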


Redirecting Output


You can redirect the output of commands. The output can be sent to another command or even a file.


pwd > test.txt


This will run the "pwd" command, which prints the working directory. The results, or output, will be stored in the test.txt file instead of appearing on screen. This is very flexible and handy whenever you need to save a command's output.


Be careful: this operation overwrites an existing file of the same name, destroying its previous contents.
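A small demonstration in a scratch directory: `>` replaces the file's contents each time, while the related `>>` operator appends instead:

```shell
dir=$(mktemp -d)
cd "$dir"
pwd > test.txt                  # test.txt now holds the current directory
echo "second line" >> test.txt  # >> appends rather than overwriting
cat test.txt                    # two lines now
echo "wiped" > test.txt         # > overwrites: the file holds only this line
cat test.txt                    # prints: wiped
cd / && rm -rf "$dir"           # clean up
```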


Redirecting Input


Just like output, you can redirect input. This is most often done with files. A file can contain a book list, for example, and commands like cat or grep can have the file's contents sent to them as input.


cat < booklist.txt


grep Magnus < booklist.txt
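Putting those two lines to work on a file we create first (the book titles are made up):

```shell
dir=$(mktemp -d)
cd "$dir"
# Build a small book list, one title per line.
printf 'Magnus Chase\nThe Hobbit\nDune\n' > booklist.txt
cat < booklist.txt            # prints all three titles
grep Magnus < booklist.txt    # prints only the matching line: Magnus Chase
cd / && rm -rf "$dir"         # clean up
```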


Pipelines


You can connect two different commands through the use of a pipeline. This is
done with the pipe symbol, |. When it is used, it takes the output of the first
command and sends it to the input of the second command.


This is very similar to redirecting output and sending it to a file. The difference is that we are just dealing with commands. This makes the pipeline very flexible and good to use when appropriate.


ls | lpr


The above example takes the output from the ls command and sends it to the lpr command. The lpr command is a print utility, so lpr will print the
files listed by ls.


who | sort


This example takes the output of the who utility and sends it to the sort
utility. A list of users on your computer is alphabetically sorted by this one command.


who | grep jmoore


This is another good combination. The who utility lists users, and the grep
utility searches for patterns that you specify. Here, we want to search for a particular user.

If you have a bunch of users and you need specific information, then get your list with who and send the output to the grep utility.


There are many utilities that work well in pipelines. Don't worry about knowing them all at once; over time, it gets easier to put them together when you need specific information. You can also chain three or more utilities with pipelines, as long as nothing conflicts.
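For instance, three stages chained together: list files, keep only the .txt ones, and sort the survivors. The file names are made up for the demonstration:

```shell
dir=$(mktemp -d)
cd "$dir"
touch zebra.txt apple.txt notes.md
ls | grep '\.txt$' | sort   # prints: apple.txt then zebra.txt (notes.md is filtered out)
cd / && rm -rf "$dir"       # clean up
```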


Background Commands


You can run commands or utilities in the foreground or background. Most of
your commands will be in the foreground. There are good times when you want
to run them in the background though.

If a command will take a long time to run, then it is a good candidate to run in the background.


The reason you would want to do this is that it frees up your shell for you to run other commands and do other tasks. When you run a command in the background, it becomes a job.

The shell keeps track of it and assigns it a job number. You can even query this job number to check on the progress of the job.


You put the & sign at the end of a command to indicate it should run in the background. One thing I do a lot is update computers on my network, and I have a script I wrote for this.


./updates.sh &


This will run my script in the background as a job. I can do other things while it runs, because it is going to take a long time. This makes it very useful.
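A safe way to try this without my update script: start a slow command in the background, list the job, and wait for it to finish:

```shell
sleep 3 &     # run sleep in the background; the shell assigns it a job number
jobs          # list background jobs; sleep shows up here
echo "free to do other work while sleep runs"
wait          # block until all background jobs finish
echo "background job done"
```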


To use an earlier example, you can do the same with whatever you need to print.

 

ls | lpr &


Again, this sends the output of ls to the lpr print utility and prints everything in the background.

Commands can have options and arguments that you use after the command name. These modify the behavior of the command itself. When you enter a command, its program needs to be in a directory listed in the path variable, or you need to run it from its own directory.

You can chain commands through the use of pipelines. Pipelines use the | symbol. They take the output of the first command and send it to the input of the second command.

Commands can also be run in the background. This is another useful feature that will enhance your productivity. If there is a long task to run, start it and have it run in the background.

It will go away from sight but still be running. You can then use your shell to do other tasks like create new users or modify permissions on files.   

Filtering Text In Linux

Filtering text allows you to do many efficient tasks in Linux. Displaying and sorting text is one of the most common tasks that you will do. This section is an introduction to filters in order to create pipelines for your workflow.

Introduction

Filtering text is the process of capturing text, doing something with it, and then sending it to the output stream. Most commonly, the output from one command is taken and redirected to the input of another command. This is usually accomplished through pipes and stream operators.

Streams

A stream is a flow of data. There are input streams and output streams, and they can be connected to a terminal, a file, or a network device. There are three standard ones:

  • stdin
  • stdout
  • stderr

The first, stdin, carries input into commands. Next, stdout, carries the normal output of commands. Then, stderr, carries any errors that were produced.
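The output streams can be redirected independently: `>` grabs stdout and `2>` grabs stderr. A sketch using a file that exists and one that does not:

```shell
dir=$(mktemp -d)
cd "$dir"
touch real.txt
# ls on a real and a missing file produces both normal output and an error.
ls real.txt missing.txt > out.txt 2> err.txt
cat out.txt   # the stdout stream: real.txt
cat err.txt   # the stderr stream: the "No such file or directory" complaint
cd / && rm -rf "$dir"   # clean up
```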

Pipes

The pipe symbol, “|”, is one way to redirect output from one command to the input of another. Input can come from a command or a file. You can make a long sequence of commands using pipes. The output is usually shown in the terminal.

Output Redirection

The operator, “>”, can send output to a file. This is what you want to do if you need to save the results. Once you have data in a file, you have many more options. You can show the contents of a file, see any special characters associated with it, and split a file into two pieces.

The Cat Command

The cat command can show the contents of a file and create files. By default, it reads from stdin unless you specify a file to read from.

$ echo -e "1 teamup\n2 unbroken_bonds\n3 unified_minds\n4 cosmic_eclipse" > edition.txt

$ cat edition.txt
1 teamup
2 unbroken_bonds
3 unified_minds
4 cosmic_eclipse

In the first snippet we just sent some data to a text file that we created at the same time. Then we showed the contents of the file in the second snippet. This shows you how it works.

Let's make a second file now.

$ echo -e "1 breakpoint\n2 breakthrough\n3 ultra_prism\n4 celestial_storm" > edition2.txt

Make sure the output is what we expect.

$ cat edition2.txt
1 breakpoint
2 breakthrough
3 ultra_prism
4 celestial_storm

The cat command also concatenates files. It just so happens that we have two files, ready for joining.

$ cat edition*

The asterisk is a wildcard: it expands to every filename that begins with "edition".

1 teamup
2 unbroken_bonds
3 unified_minds
4 cosmic_eclipse
1 breakpoint
2 breakthrough
3 ultra_prism
4 celestial_storm

This sends everything in those two files to the screen output. We can do something else cool, we can just make a third file with the contents of the first two.

$ cat edition.txt edition2.txt > edition3.txt

This makes a third file that contains the contents of the first two.

$ cat edition3.txt
1 teamup
2 unbroken_bonds
3 unified_minds
4 cosmic_eclipse
1 breakpoint
2 breakthrough
3 ultra_prism
4 celestial_storm

That is really useful text manipulation. This also showcases the flexibility of the "cat" command.

Wordcount Command

We can use this utility, "wc", to get more information from a file. This is handy if we know nothing about a file.

$ wc edition3.txt
  8  16 119 edition3.txt

We used this on the file we just created. It shows us the lines, words, and bytes in the file, which is very nice when you need to examine an unfamiliar file. A file may be thousands of lines long, and you don't want all of that in your terminal output. If it is huge like that, you have another option.

Tail Command

The tail command can show you the last lines of a file. By default, it shows you the last ten lines.

$ tail edition3.txt

My file is small but if it was large, that is the usage you would want to try first. 

Head Command

The head command is the same as tail, except it shows you the first lines of a file. It is used in the same way. 
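Both commands accept `-n` to choose how many lines they show. Using the edition3.txt file from earlier, recreated here so the snippet stands alone:

```shell
dir=$(mktemp -d)
cd "$dir"
printf '1 teamup\n2 unbroken_bonds\n3 unified_minds\n4 cosmic_eclipse\n' > edition3.txt
head -n 2 edition3.txt   # the first two lines
tail -n 1 edition3.txt   # the last line only: 4 cosmic_eclipse
cd / && rm -rf "$dir"    # clean up
```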

Working With Files

The following commands are what makes working with the command line worth it.
All of these tasks can be performed in a graphical environment, but when you get
used to the command line, they become much faster.

Wildcards
Wildcards are one of the things that make the command line so strong. They give
us a lot of flexibility, allowing you to select filenames based on patterns of
characters.
*     matches any characters
?     matches any single character
Using wildcards makes it possible to create complicated search queries. 
*       all files
a*      any file beginning with a
a*.txt  any file beginning with a followed by characters and ending with .txt
file??? any file beginning with the name file and followed by exactly 3
        characters.
Wildcards can be used with any command that accepts filenames as arguments.
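
Here is a small sketch of those patterns in action, using a scratch directory with made-up filenames:

```shell
# Set up a few empty files to match against.
mkdir -p glob-demo
touch glob-demo/alpha.txt glob-demo/apple.txt glob-demo/banana.txt \
      glob-demo/file001 glob-demo/file002
ls glob-demo/a*.txt      # alpha.txt and apple.txt
ls glob-demo/file???     # file001 and file002
```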

Creating Directories
The mkdir command is used to create directories.
mkdir directory-name
We can also make several directories at once.
mkdir name1 name2 name3 name4 name5

Copying Files
The cp command is what we use to copy files or directories. 
cp file1 file2
This copies the single file to another file.
cp -a file1 file2
The option -a copies a file with all of its attributes to another file.
cp -i file1 file2
The option -i will prompt the user for confirmation when overwriting a file.
cp -r folder1 folder2
The -r option will copy folders and all of their contents. 
cp -u file1 file2
The -u option will only copy files that do not exist or are newer than the
existing corresponding files in the destination directory.
cp -v folder1 folder2
The -v option will display extra information as copying is done.
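
A short sketch of -v and -u together, in a scratch directory with made-up filenames:

```shell
# A scratch directory keeps the demo files out of the way.
mkdir -p cp-demo
printf 'hello\n' > cp-demo/file1
cp -v cp-demo/file1 cp-demo/file2    # -v reports the copy as it happens
cp -u cp-demo/file1 cp-demo/file2    # -u: nothing copied, file2 is current
```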

Moving Files
We move and rename files with the mv command. So, the mv command can be used in
multiple ways. 
mv file1 file2
This will rename file1 to file2.
mv file1 folder1
When used like this, it moves file1 into folder1. 
mv -i file1 file2
The -i option will ask for confirmation before overwriting file2.
mv -u file1 file2
This will again only move files that do not exist or are newer than the files in
the destination folder.
mv -v file1 folder1
The -v will also give extra information when moving file1 to folder1.
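
Both uses of mv can be sketched in one scratch directory (names are made up):

```shell
mkdir -p mv-demo/folder1
printf 'data\n' > mv-demo/file1
mv mv-demo/file1 mv-demo/file2         # rename file1 to file2
mv -v mv-demo/file2 mv-demo/folder1    # move file2 into folder1, verbosely
```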

Removing Files
The rm command is used to remove files and folders.
rm file1
That will remove a file.
rm -i file1
This will ask for confirmation because of the -i option.
rm -r folder1
This will remove a folder and all of its subdirectories. You must use this
option to delete folders. 
rm -v file1
The -v gives extra information when performing this task.
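
A small sketch in a scratch directory, so nothing real is at risk:

```shell
mkdir -p rm-demo/folder1
touch rm-demo/file1 rm-demo/folder1/nested
rm -v rm-demo/file1      # remove a single file, with a report
rm -r rm-demo/folder1    # remove the folder and everything inside it
```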

Creating Links
We create links using the ln command. Links can be either hard or soft. Hard
links are an older way of doing things, while soft links are the modern way. 
This creates a hard link:
ln file link
This creates a soft link:
ln -s file link
ln -s folder link
As you can see, you cannot make a hard link of a folder or directory. That must
be done with a soft link. 

Every file has a hard link associated with it. When a hard link is created, we
are making another way to reference the file. Hard links cannot reference
anything outside their original file system, and they cannot reference a directory. 

Soft links were made to overcome the limitations of hard links. When you create
a soft link, you are creating a unique file that contains a pointer to the
original file or directory. When you write information to the soft link, the
original file is updated. So unless you go looking, it is hard to tell the
difference between the two. However, when you delete the link, the original file
is untouched. If the file is deleted first, the link stays but points to
nothing. 
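
That behavior can be sketched directly (the filenames are made up). After the rm, the soft link is left dangling while the hard link still reaches the data:

```shell
mkdir -p ln-demo
printf 'original\n' > ln-demo/file
ln ln-demo/file ln-demo/hardlink    # a second name for the same data
ln -s file ln-demo/softlink         # a pointer to the name "file"
rm ln-demo/file                     # delete the original name
cat ln-demo/hardlink                # still prints "original": the data survives
```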

Commands in Linux

In Linux, a command can be a program, a shell builtin, a shell function,
or an alias. Programs are executables such as those in the /usr/bin directory.
There can be many different types.

Commands in the shell are built in to the shell. A shell
function is a small script that does something useful, hopefully. Aliases are
commands that we define ourselves, built from other commands.

Type
It can be useful to know what kind of command you are using. You can find out by
typing:


type free or
type dnf

You could get a different result for each command, depending on what you type.
The reason is, as mentioned above, there are a few different types of commands.
So, don't freak out when you see multiple types. 

Which
The which command gives you the location of an executable.

 
which free

It only works for executable programs.

Documentation
We can now get the documentation for a command. Use "help" for the built in
commands.


help cd

It will give a description of what the command does as well as options. Also,
when square brackets appear in the description of a command's syntax, they
indicate optional items. A vertical bar character indicates mutually exclusive
items. There is a help option after these commands, so you can get help either
way you like.


free --help

This gives you usage and options related to the command in question.

Man
Most programs will have a manual page. It can be abbreviated as "man".


man free

This will give you almost everything related to the "free" command. Probably
more than you care to know, honestly. Just know it is available. Most do not
provide examples and are just a reference. 

Apropos
This will display appropriate commands related to a search term.

 
apropos free

This will give different man pages that might be helpful. The first column is
the name of the man page and afterwards, a description. 

Whatis
This command will display one line manual page descriptions.


whatis dnf

It is a simplified view but can be useful.

Info
This will display a program's info entry.

 
info dnf

It gives you a lot of information, and it is well formatted. It contains
hyperlinks to help you move around in the directory structure. Use Page Up or
Page Down to move quickly, and hit Enter with a hyperlink selected to follow it.
Press Q to quit the info program.

Readme Files
A lot of software packages that are installed on your system have documentation
files. These files are located in the /usr/share/doc directory. Most of these
are stored in text format and can be viewed with the less command. Some are also
in HTML format and can be viewed in a web browser. 

Creating Aliases
We can create our own commands, or aliases, for other commands and associated
options. The first thing to do is check whether the name you want is already
taken. If I want to use fr as a shorthand for free memory, I would type:


type fr

It will say "not found" if the name is available. You might use the same
aliases on every system you work on, and sometimes I can't remember what I have
done on a given system, which is why it is useful to check. To make an alias:


alias fr='free -h'

As you can see, we aren't just making a shorter command to type less. We
included an option there too. We are typing a lot less when we do this command
several times a day. We can again use the type command and we can see our alias
now.

 
type fr

You can see what we just did, which is cool! The aliases will go away when your
session ends, so remember that. We will go over how to make them permanent
later, which is very useful. 

Rsync

Rsync stands for remote synchronization, and it transfers and syncs files, both locally and remotely. It is a very good tool: though it has a learning curve, it is not hard to pick up. Its main use is copying files and directories between two different computers. It can compare files and send only what has changed, and it can preserve all kinds of links and metadata.

Installing Rsync     

If you do not already have it installed on your system, you will need to install it. I am running Fedora. If you are running another distribution, use whatever package manager you have to install it.

 

On Fedora run:

dnf update -y

 

This will update your files. Then:

dnf install rsync -y

 

This will install rsync to your system if it is not already there.

 

Now run:

which rsync

 

This will show you where it is installed on your system.

Then run:

rsync --version

 

That shows you the version you have.

 

Copying Files

Copying files is really easy. It is:

 

rsync -v source destination

 

The -v option means output will be given verbosely.

Source is the full path of the source file unless you are in its directory already.

Destination should be the full path unless it is in your current path too.

It looks like this:

 

rsync -v program1.cpp Documents

 

In the above example I was already in the directory of the file I wanted to copy. You should do that when you can. I transferred it to the Documents folder.

Another example that is slightly different:

 

rsync -av /home/jason/Documents /home/jason/Writing/

 

This command copies my Documents folder into my Writing folder.

You can use the ls command to look and make sure everything is transferred as expected.

 

ls Writing/

 

There are many reasons to make copies of your files. Backing up important files to another remote location is something we should all do more.

 

Whenever you do a file transfer, it is a good idea to switch to that location and make sure it is copied over. Doing this a few times will instill confidence in your command line abilities.

 

The Trailing /

The trailing slash at the end of a source path dictates whether rsync will copy the contents of a directory or the directory itself, folder included. Excluding the / from the source path copies the directory itself into the destination.

 

# This command will copy the Writing directory and its contents to the backup drive

rsync -avz /home/jason/Writing /path/BackupDrive/

 

# This command will only copy the files in the Writing directory to the backup drive.

rsync -avz /home/jason/Writing/ /path/BackupDrive

 

This is a small difference but it is very important to get right.
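
The difference can be sketched with scratch directories (all names here are made up, and the block skips quietly if rsync is not installed):

```shell
# Skip quietly if rsync is not available on this system.
if command -v rsync >/dev/null 2>&1; then
    mkdir -p trail-demo/Writing trail-demo/with trail-demo/without
    touch trail-demo/Writing/draft.txt
    rsync -a trail-demo/Writing  trail-demo/without/   # no slash: the directory itself
    rsync -a trail-demo/Writing/ trail-demo/with/      # slash: just its contents
    ls trail-demo/without    # shows Writing
    ls trail-demo/with       # shows draft.txt
fi
```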

 

Copying Contents of Directories

It is often very useful to copy entire directories at once. It is easy to do this. Use:

 

rsync -av     /source/     /destination/

 

Just use the full paths of the source and destination.

So, something like this should get the job done:

 

rsync -av /home/jason/Documents/ /home/jason/Backup/

 

Copying Directories to other Directories

If we want to copy a folder to another folder then we do this:

 

rsync -av /home/jason/Documents /home/jason/Backup/

 

You should look inside the destination directory to make sure you typed the command correctly. You should see the folder nested in there.

 

Copying A File Remotely

Rsync lets you connect to different machines. This makes copying files to other machines an easy practice. You will need:

  1. File path from local machine
  2. IP address of remote machine
  3. File path on remote machine
  4. Root access to remote machine

The command will look something like this depending on what you need to do:

 

rsync -v /path/from/local/machine     [email protected]:/root/remote/path

 

Copying Directory To Another Drive

This is very handy and gives you better protection. It is also easy to implement. 

 

rsync -av /home/jason/Writing /path/BackupDrive

 

As usual, go and look to make sure everything happened the way you expect. After a while, you will not feel the need to do this.

 

Copying Directories Remotely

Rsync can handle remote directories just as easily as single files. When you run this command, you will be asked for the remote user's password, so be prepared on that front. The command looks like this:

 

rsync -av   /local/path   [email protected]:/root/remote/path/

 

Compressing Files

Rsync can compress files that it tries to transfer. This will speed up a transfer. If your transfer is very small, you will not see a difference. However, if you are doing lots of video, for example, this will be of great benefit. Do it like this:

 

rsync -avz /home/jason/Video /path/BackupDrive/

 

This command will copy the Video folder over to my backup drive.

 

Monitoring Your Progress

If we are doing a long transfer, we can monitor the progress. I like statistics so this is useful for me. The command looks like this:

 

rsync -avz --info=progress2  /home/jason/Video /path/BackupDrive/

 

This will give you the results of your transfer.

 

Syncing Directories

Syncing directories is easily done. Keep in mind that sometimes files will be deleted, and they will be gone, so use this command after careful consideration. We use the --delete option with the regular command plus source and destination paths. This looks at the source directory and then makes the destination directory match it. It looks like this:

 

rsync -aP --delete /home/jason/Writing/ /path/BackupDrive/Writing/

 

Excluding Files and Directories

Rsync can easily look the other way during a command if you want it to. So, if I want to exclude a subfolder of my Writing folder, it will do that. Here is how.

 

rsync -avzP --exclude=Algebra /home/jason/Writing /path/BackupDrive

 

We can also exclude files from a transfer or sync operation. If I want to exclude .mp3 files it looks like this:

 

rsync -avzP --exclude='*.mp3' /home/jason/Music/ /path/BackupDrive

 

Options

  • -a = --archive mode, equal to several other flags at once. It tells rsync to sync recursively, transfer special and block devices, and preserve symbolic links, modification times, groups, ownership, and permissions.
  • -z = --compress. This option compresses the data that is sent to the destination machine.
  • -P = --partial and --progress. Using this option shows a progress bar during the transfer and keeps track of partially transferred files.
  • --delete. When you use this option, it will delete extra files from the destination folder that are not in the source folder. It is how you mirror directories.
  • -q or --quiet. Use this when you don't want to see error messages.
  • -e. Use this when you want to choose the remote shell to use.

Input and Output Redirection

Standard Input and Output
Many of the programs we have used so far produce output of some kind. This
output consists of two types: the program's results, meaning the data the
program is designed to produce, and status and error messages that tell us
about the program in question.

If we look at a command like ls, we can see that it displays its results and its
error messages on screen. Programs such as ls send their results to a special
file called standard output and their status messages to another file called
standard error. By default, both standard output and standard error are linked
to the screen and not saved into a disk file. In addition, many programs take
input from a facility called the standard input, which by default is attached to
the keyboard.

Input and output redirection allows us to change where output goes and where
input comes from. Normally, output goes to the screen and input comes from the
keyboard, but with redirection, we can change that.

Redirecting Output
Redirection allows us to redefine where standard output goes. To redirect
standard output to another file instead of the screen, we use the redirection
operator ">" followed by the name of the file. 

ls -l /usr/bin > ls-output.txt

Here, we created a long listing of the /usr/bin directory and sent the results
to the ls-output.txt file. If it is a long file we can use the less command:

less ls-output.txt

If we want to append information to the file instead of overwriting it, we use the
">>" redirection operator.

ls -l /usr/bin >> ls-output.txt

Using the >> operator will result in the output being appended to the file. If
the file does not exist, it is created.
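
A minimal sketch of both operators, using small throwaway files instead of a directory listing:

```shell
printf 'alpha\nbeta\n' > redir-src.txt
cat redir-src.txt >  redir-out.txt    # > overwrites (or creates) the file
cat redir-src.txt >> redir-out.txt    # >> appends a second copy
wc -l redir-out.txt                   # four lines now
```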

Redirecting Standard Error
Redirecting standard error lacks the ease of a dedicated redirection operator.
To redirect standard error, we must refer to its file descriptor. A program can
produce output on any of several numbered file streams. While we have referred
to the first three of these file streams as standard input, output, and error,
the shell references them internally as file descriptors 0, 1, and 2. The shell
provides a notation for redirecting files using the file descriptor number.
Because standard error is number 2, we can redirect standard error like this:

ls -l /bin/usr 2> ls-error.txt

The file descriptor 2 is placed immediately before the redirection operator to
perform the redirection of standard error to the file ls-error.txt. There are
cases in which we may want to capture all of the output of a command to a single
file. To do this, we must redirect both standard output and standard error at
the same time. 

ls -l /usr/bin > ls-output.txt 2>&1

Using this method, we perform two redirections. First we redirect standard
output and then we redirect file descriptor 2 to file descriptor 1 using the
notation 2>&1.

Sometimes, we do not want output from a command. This usually applies to error
and status messages. The system provides a way to do this by redirecting output
to a special file called /dev/null. This file is a system device often referred
to as a bit bucket, which accepts input and does nothing with it. 

ls -l /usr/bin 2> /dev/null
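
All three forms can be sketched against a path that deliberately does not exist (the `|| true` just keeps the failing ls from stopping a script):

```shell
# /no/such/dir does not exist, so ls writes a complaint to standard error.
ls /no/such/dir 2> ls-error.txt || true          # error text captured in the file
ls /no/such/dir > all-output.txt 2>&1 || true    # both streams into one file
ls /no/such/dir 2> /dev/null || true             # complaint silently discarded
cat ls-error.txt
```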

Redirecting Standard Input
Up to now, we have not encountered many commands that make use of standard
input. The "cat" command reads one or more files and copies them to standard
output.

cat filename

You can use it to display files without paging. 

cat ls-output.txt

It is often used to display short text files. Because "cat" can accept more than
one file as an argument, it can also be used to join files together.

Pipelines
The capability of commands to read data from standard input and send to standard
output is utilized by a shell feature called pipelines. Using the pipe operator
|, the standard output of one command can be piped into the standard input of
another.

ls -l /usr/bin | less

Using this technique, we can conveniently examine the output of any command that
produces standard output.

Pipelines are often used to perform complex operations on data. It is possible
to put several commands together into a pipeline. Frequently, the commands used
in this way are referred to as filters. Filters take input, change it, then
output it. 

ls /bin /usr/bin | sort | less

Because we specified two directories, the output of ls would have consisted of
two sorted lists, one for each directory. By including sort in our pipeline, we
changed the data to produce a single sorted list.

The "uniq" command is often used in conjunction with "sort". It accepts a sorted
list of data from either standard input or a single filename argument then
removes any duplicates from the list. 

ls /bin /usr/bin | sort | uniq | less

We use "uniq" to remove any duplicates from the output of the "sort" command. If
we want to see the list of duplicates, we add the "-d" option to "uniq".

ls /bin /usr/bin | sort | uniq -d | less
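
Both behaviors can be sketched with a small throwaway list instead of a directory listing:

```shell
printf 'pear\napple\npear\nbanana\napple\n' > fruit.txt
sort fruit.txt | uniq        # each name listed once
sort fruit.txt | uniq -d     # only the names that repeat
```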

The "wc" command is used to display the number of lines, words, and bytes
contained in files.

wc ls-output.txt

In this case, it prints out three numbers: lines, words, and bytes. Like our
previous commands, if executed without command line arguments, "wc" accepts
standard input. The "-l" option limits its output to report only lines. Adding
it to a pipeline is a handy way to count things. To see the number of items we
have in our sorted list we can do this:

ls /bin /usr/bin | sort | uniq | wc -l

The command "grep" is a powerful program used to find text patterns within
files. It is used like this:

grep pattern filename

When "grep" encounters a pattern in the file, it prints out the lines containing
it. The patterns that "grep" can match can be very complex. Suppose we wanted to
find all the files in our list of programs that had the word zip embedded in the
name. Such a search might give us an idea of some of the programs on our system
that had something to do with file compression.

ls /bin /usr/bin | sort | uniq | grep zip

There are a couple handy options for "grep".
The option "-i" causes "grep" to ignore case when performing the search.
The option "-v" tells "grep" to print only those lines that do not match the
pattern. 
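
Here is a small sketch of both options, using a made-up file so the matches are predictable:

```shell
printf 'gzip\nGZIP notes\nunzip\ncat\n' > tools.txt
grep zip tools.txt       # lines containing zip
grep -i zip tools.txt    # case-insensitive, so GZIP matches too
grep -v zip tools.txt    # lines that do not contain zip
```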

Sometimes, you do not want all the output from a command. You might want only
the first few lines or the last few lines. The "head" command prints the first
10 lines of a file, and the "tail" command prints the last 10 lines. By default,
both commands print 10 lines of text, but this can be adjusted with the "-n"
option.

head -n 5 ls-output.txt

The "tail" command operates the same way:

tail -n 5 ls-output.txt

The "tail" command also has an option to let you view files in real time. This
is useful for watching the progress of files as they are being written. 

tail -f /var/log/messages

Using the "-f" option, "tail" continues to monitor the file, and when new lines
are appended, they immediately appear on the display. This continues until you
type "ctrl-c".

The "tee" command reads standard input and copies it to both standard output and
to one or more files. This is useful for capturing a pipeline's contents at an
intermediate stage of processing. 

ls /usr/bin | tee ls.txt | grep zip
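
A runnable sketch of that pipeline (the `|| true` only covers the case where nothing matches zip on a given system):

```shell
# Capture the full listing in a file while passing it on to grep.
ls /usr/bin | tee bin-list.txt | grep zip || true
wc -l bin-list.txt    # the file holds the complete, unfiltered listing
```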

As always, check out the documentation of each of the commands we have covered.
We have seen only the most basic usage but have a number of interesting options.
You will see that the redirection feature of the command line is very useful for
solving specialized problems.

Permissions

Ownership of Files
Sometimes, when we try to access a file, we do not have permission to do so.
This can be a read or write permission, for example. In Unix and Linux, a user
may own files and directories. When a user owns a file or directory, the user
has control over its access. Users can belong to a group consisting of one or
more users who are given access to files and directories by their owners. In
addition to granting access to a group, an owner may also grant some set of
access rights to everybody. To find out details about yourself on the system,
use the "id" command.
 
id
 
When user accounts are created, users are assigned a number called a user ID,
which is then mapped to a username. The user is assigned a group ID and may
belong to other groups. 
 
This information comes from certain text files in Linux. User accounts are
defined in the /etc/passwd file, and groups are defined in the /etc/group file.
When user accounts and groups are created, these files are modified along with
/etc/shadow, which holds information about the user's password. 
 
For each user account, the /etc/passwd file defines the user login name, user
ID, group ID, and account's real name, home directory, and login shell. When we
look at the contents of /etc/passwd and /etc/group, we see that besides the
regular user accounts, there are accounts for the superuser and other system
users.
 
Reading, Writing, and Executing
Access rights to files and directories are defined in terms of read access,
write access, and execution access. If we look at the output of the ls command,
we can get some clue as to how this is implemented.
 
ls
 
The first 10 characters of the listing are the file attributes. The first of
these characters is the file type. 
 
  • -     a regular file
  • d     a directory
  • l     a symbolic link
  • c     a character special file
  • b     a block special file
 
The remaining 9 characters of the file attributes, called the file mode,
represent the read, write, and execute permissions for the file's owner, the
file's group owner, and everyone else.
 
  • r     allows a file to be opened and read
  • w    allows a file to be written to
  • x     allows a file to be treated as a program and executed
 
Change File Mode
To change the mode or permission of a file or directory, use the "chmod"
command. Only the file's owner or the superuser can change the mode of a file or
directory. This command supports two distinct ways of specifying mode changes.
They are octal number representation and symbolic representation.
 
We will cover octal number representation first. With octal notation, we use
octal numbers to set the pattern of desired permissions. Because each digit in
an octal number represents 3 binary digits, this maps nicely to the scheme used
to store the file mode. By using 3 octal digits, we can set the file mode for
the owner, group owner, and everyone else.
 
chmod 600 example.txt
 
By passing the argument 600, we were able to set the permissions of the owner to
read and write while removing all permissions from the group owner and everyone
else. Though remembering the octal to binary mapping may seem inconvenient, you
will usually have to use only a few common ones.
  • 7     rwx
  • 6     rw-
  • 5     r-x
  • 4     r--
  • 0     ---
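
The chmod 600 example can be sketched end to end; 644 is another common mode, shown here as an assumption about what you might want next:

```shell
touch octal-demo.txt
chmod 600 octal-demo.txt
ls -l octal-demo.txt    # -rw-------: owner read/write, nothing for anyone else
chmod 644 octal-demo.txt
ls -l octal-demo.txt    # -rw-r--r--: group and others may now read
```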
 
Chmod also supports a symbolic notation for specifying file modes. Symbolic
notation is divided into 3 parts.
who the change will affect
which operation will be performed
what permission will be set
To specify who is affected, a combination of the characters u,g,o, and a is
used.
 
  • u     file or directory owner
  • g     group owner
  • o     everyone else
  • a     all, short for u,g, and o
 
If no character is specified, all will be assumed. The operation may be a "+"
indicating that a permission is to be added, a "-" indicating that a permission
is to be taken away, or an "=" indicating that only the specified permissions
are to be applied and that all others are to be removed.
 
Some people prefer to use octal notation and some like the symbolic. Symbolic
notation does offer the advantage of allowing you to set a single attribute
without disturbing any of the others.
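
A short sketch of the symbolic form, using a made-up filename:

```shell
touch symbolic-demo.sh
chmod u+x symbolic-demo.sh      # add execute for the owner only
chmod go-rwx symbolic-demo.sh   # strip all permissions from group and others
ls -l symbolic-demo.sh          # the owner keeps full access; nobody else has any
```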
 
Setting Umask
The umask command controls the default permissions given to a file when it is
created. It uses octal notation to express a mask of bits to be removed from a
file's mode attributes.
 
When we set the mask to 0000, we are turning it off. This makes a newly created
file writable by anyone.
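
The effect of the mask can be sketched in a subshell, so the change does not stick to your session (the filenames are made up):

```shell
# Run in a subshell so the umask changes do not leak into your session.
(
  umask 022
  touch mask-default.txt    # 666 masked by 022 -> 644: -rw-r--r--
  umask 077
  touch mask-private.txt    # 666 masked by 077 -> 600: -rw-------
)
ls -l mask-default.txt mask-private.txt
```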
 
Changing Identities
Sometimes, we need to become another user. This is often done to test an account
or figure out what is wrong for a certain user. We can log in as the user, use
the "su" command in the terminal, or use the "sudo" command in the terminal.
These all do things differently. The "su" command allows you to assume the
identity of another user and either start a new shell session with that user's
ID or issue a single command as that user.
 
The "sudo" command allows an administrator to set up a configuration file called
/etc/sudoers and define specific commands that particular users are permitted to
execute under an assumed identity. This means the administrator can configure
"sudo" to allow an ordinary user to execute commands as a different user in a
controlled way. A user may be restricted to one or more specific commands and no
others. An important difference is that the use of "sudo" does not require
access to the superuser's password. 
 
Changing Passwords
To set or change a password, use the "passwd" command.
 
passwd username
 
To change your password, just enter the "passwd" command. You will be prompted
for your old password and then your new password. The command will try to
enforce the use of strong passwords. This means it will refuse to accept
passwords that are too short or are too similar to previous passwords, are
dictionary words, or are too easily guessed.
 
If you have superuser privileges, you can specify a username as an argument to
the "passwd" command to set the password for another user. Other options are
available to the superuser to allow locking, password expiration, and other
things. 

Processes

Intro
Processes are how Linux organizes the different programs waiting for their turn
at the CPU.

How processes Work
When a system starts up, the kernel initiates a few of its own activities as
processes and launches a program called init. It runs a series of shell scripts
(located in /etc) called init scripts, which start all the system services. Many
of these services are implemented as daemon programs, programs that just sit in
the background and do their thing without having any user interface. So, even if
we are not logged in, the system is at least a little busy performing routine
stuff.

The fact that a program can launch other programs is expressed in the process
scheme as a parent process producing a child process. The kernel maintains
information about each process to help keep things organized. For example, each
process is assigned a number called a process ID, or PID. PIDs are assigned in
ascending order, with init always getting PID 1. The kernel also keeps track of
the memory assigned to each process, as well as the processes' readiness to
resume execution. Like files, processes also have owners and user IDs, effective
user IDs, and so on.

Viewing Processes
The most commonly used command to view processes is 'ps'. The 'ps' program has a
lot of options but it is used like this:

ps

The result in this example lists two processes, which are bash and ps. As we can
see, ps does not show us very much, just the processes associated with the
current terminal session. To see more, we need to add some options. 

If we add an option, we can get a bigger picture of what the system is doing.

ps x

Adding the x option tells ps to show all of our processes regardless of what
terminal they are controlled by. The presence of a ? in the TTY column indicates
no controlling terminal. Using this option, we see a list of every process that
we own.

Because the system is running a lot of processes, ps produces a long list. It is
often helpful to pipe the output from ps to less for easier viewing. Some option
combinations also produce long lines of output, so maximizing the terminal
emulator window might be a good idea too.

Process States
R       Running
S       Sleeping
D       Uninterruptible sleep
T       Stopped
Z       Defunct or zombie process
<       High priority process
N       Low priority process

The process state may be followed by other characters. These indicate various
exotic process characteristics. Another popular set of options is aux. This
gives us even more information. 

ps aux

This set of options displays the processes belonging to every user. Using the
options without the leading dash invokes the command with BSD-style behavior.
The Linux version of ps can emulate the behavior of the ps program found in
different Unix implementations.

Viewing Processes Dynamically
While the ps command can reveal a lot about what the machine is doing, it
provides only a snapshot of the machine's state at the moment the ps command is
executed. To see a more dynamic view of the machine's activity, we use the top
command.

top

The top program displays a continuously updating display of the system processes
listed in order of process activity. The name top comes from the fact that the
top program is used to see the top processes of the system. The top display
consists of two parts, a system summary at the top of the display, followed by a
table of processes sorted by cpu activity. 

The top program accepts a number of keyboard commands. The two most interesting are h, which displays the program's help screen, and q, which quits top. 

Both major desktop environments provide graphical applications that display
information similar to top, but top is better than the graphical versions
because it is faster and it consumes far fewer system resources. After all, our
system monitor program should not be the source of the system slowdown that we
are trying to track.

Controlling Processes
Now that we can see and monitor processes, let us gain some control over them.
Type gedit to open that program. Now type control-c to interrupt the program. It
should close. Most command line programs can be closed in this way. 

If we want the shell program back on top but still want another program to run,
we can put it into the background. To launch a program so that it is immediately
placed in the background, we follow the command with a & character.

gedit &

After entering the command, the window appears and the shell prompt returns. A message will also appear; this is a shell feature called job control. With this
message, the shell is telling us we have started a job number and what its PID
is. If we then run ps, we can see this process.

The shell's job control facility also gives us a way to list the jobs that have
been launched from our terminal. Using the jobs command, we can see this list.

jobs

This shows us a list of the current jobs running.
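As a sketch of job control at work, we can try it from a script, using sleep as a stand-in for a long-running program:

```shell
# Launch a long-running command in the background; the shell assigns
# it a job number and prints the PID.
sleep 30 &

# List the jobs launched from this shell; the output looks
# something like: [1]+  Running    sleep 30 &
jobs

# Clean up the background job by its job number.
kill %1
```

The %1 notation refers to job number 1, a shorthand we will meet again with the fg and bg commands.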

Returning a Process to the Foreground
A process in the background is immune from terminal keyboard input, including
any attempt to interrupt it with control-c. To return a process to the
foreground, use the fg command:

# run the jobs command to get the job number of the process you want
jobs
# then use the job number with the fg command
fg %1    # 1 is the job number we want to bring to the foreground

The fg command is followed by a percent sign and then the job number. If we have
only one background job, the job number is optional.

Stopping A Process
Sometimes we will want to stop a process without terminating it. This is the
same as pausing it. This is often done to allow a foreground process to be moved
to the background. To stop a foreground process and place it in the background,
press control-z.

After stopping the process, we can verify that the program has stopped by trying
to use it. It will not respond. We can either continue the program's execution
in the foreground, using the fg command, or resume the program's execution in
the background with the bg command.

bg %1

As with the fg command, the job number is optional if there is only one job.
Moving a process from the foreground to the background is handy if we launch a
graphical program from the command line but forget to place it in the background
by appending the trailing &.

There are two reasons why we would want to launch a graphical program from the
command line. The program you want to run might not be listed on the window
manager's menu. By launching a program from the command line, you might be able
to see error messages that would otherwise be invisible if the program were
launched graphically. 

Sometimes, a program will fail to start up when launched from the graphical
menu. By launching it from the command line instead, we may see an error message
that will reveal the problem. Also, some graphical programs have interesting and
useful command line options.

Signals
The kill command is used to kill processes. This allows us to terminate programs
that need killing.

kill 16606

We get the process ID from one of the ways mentioned before, then use that ID
with the kill command. We could have also used a job number if we wanted to go
that route. 
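In scripts, there is a third way to get the PID: the shell records the PID of the most recent background command in the special parameter $!. A minimal sketch, using sleep as a stand-in for a real program:

```shell
# Start a background process and capture its PID.
sleep 60 &
pid=$!
echo "started process $pid"

# Terminate it, then reap it with wait so no stray
# "Terminated" message appears later.
kill $pid
wait $pid 2>/dev/null
```

This avoids a round trip through ps when the script itself launched the process.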

While this is straightforward, there is more to it than that. The kill command
does not exactly kill processes; it sends them signals. Signals are one of
several ways that the operating system communicates with programs. We have
already seen signals in action with the use of control-c and control-z.

When the terminal receives one of these keystrokes, it sends a signal to the
program in the foreground. In the case of control-c, a signal called INT
(interrupt) is sent. Programs, in turn, listen for signals and may act upon them
as they are received. The fact that a program can listen and act upon signals
allows a program to do things such as save work in progress when it is sent a
terminating signal.
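Shell scripts can listen for signals too, using the trap builtin. This is a minimal sketch of the idea; the handler name and message are illustrative, not a convention:

```shell
#!/bin/sh
# Install a handler that runs when this script receives a TERM signal,
# instead of the default action (immediate termination).
cleanup() {
    echo "saving work before exit"
}
trap cleanup TERM

# Send ourselves a TERM to demonstrate; the handler runs and the
# script carries on.
kill -TERM $$
echo "still running after the signal"
```

A real script would use the handler to remove temporary files or flush data before exiting.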

Sending Signals to Processes
The kill command is used to send signals to programs. Its most common syntax is
this:

kill [-signal] PID

If no signal is specified on the command line, then the TERM (terminate) signal
is sent by default. These are the most commonly used signals:

Number    Name    Meaning
1         HUP     hangup
2         INT     interrupt
9         KILL    kill (cannot be caught or ignored)
15        TERM    terminate (the default)
18        CONT    continue (resume a stopped process)
19        STOP    stop (cannot be caught or ignored)
20        TSTP    terminal stop

kill -1 16606

In this example, we sent the process a HUP signal with the kill command. The
process terminates, and the shell indicates that the background process has
received a hangup signal. We may need to press enter a couple of times before
the message appears. Note that the signal may be specified either by number or
by name.
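Because number and name are interchangeable, the commands below do the same thing; we start a throwaway sleep process so there is a real PID to signal:

```shell
# Start a throwaway background process and capture its PID.
sleep 60 &
pid=$!

# Send a HUP signal by number; "kill -HUP $pid" would be equivalent.
kill -1 $pid

# List the signal names the shell knows about.
kill -l
```

The kill -l listing is handy when you remember a signal's purpose but not its number.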

Sending Signals to Multiple Processes
It is also possible to send signals to multiple processes matching a specified
program or username by using the killall command.

killall [-u user] [-signal] name

To demonstrate, we will start a couple of instances of a program and terminate
them all with a single command.

killall gedit

As with the kill command, you must have superuser privileges to send kill
signals to processes that do not belong to you.

Shutting Down the System
The process of shutting down the system involves the orderly termination of all
the processes on the system, as well as performing some vital housekeeping
chores before the system powers off. Four different commands can perform this
function.

halt
poweroff
reboot
shutdown

The first three are self explanatory and are generally used without any command
line options. The shutdown command is more interesting. With it, we can specify
which of the actions to perform and provide a time delay to the shutdown event.
Most often it is used like this to halt the system:

shutdown -h now

We can also reboot the system by using the -r option:

shutdown -r now

The delay can be specified in a variety of ways. Once the shutdown command is
executed, a message is broadcast to all logged in users warning them of the
impending event. Because monitoring processes is an important system
administration task, there are a lot of commands for it.

pstree        outputs a process list arranged in a tree like pattern
vmstat        outputs a snapshot of system resource usage
xload        graphical program that draws a graph showing system load over time
tload        same as xload but draws the graph in the terminal

Most modern systems feature a mechanism for managing multiple processes, and
Linux provides a rich set of tools for this purpose. Unlike some other operating
systems, Linux relies primarily on command line tools for process management.
Though there are graphical process tools for Linux, the command line tools are
greatly preferred because of their speed and light footprint.

Environment Variables

The shell holds information about our session and this is called the
environment. Programs use this information to determine our system
configuration. Many programs will use configuration files to store program data
but they will also look to the environment.

Information in the Environment
The shell stores two basic types of data in the environment, environment
variables and shell variables. Shell variables are bits of data placed there by
bash and environment variables are everything else. In addition to variables,
the shell stores some programmatic data like aliases and shell functions.

To see what is stored in the environment, we can use either the set builtin in
bash or the 'printenv' program. The set command will show both the shell and
environment variables, while 'printenv' will display only the latter. Because
the list of environment contents will be fairly long, it is best to pipe the
output of either command into less.

printenv | less

What we see is a list of environment variables and their values. For example, we
see a variable called USER, which contains the value 'me'. The 'printenv'
command can also list the value of a specific variable.

printenv USER

The set command, when used without options or arguments, will display both the
shell and environment variables, as well as any defined shell functions. Unlike
'printenv', its output is nicely sorted in alphabetical order.

set | less

It is also possible to view the contents of a variable using the echo command:

echo $HOME
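The difference between the two commands shows up with shell variables: echo expands any variable the shell knows, while printenv searches only the environment. A minimal sketch, with a made-up variable name:

```shell
# Create a shell variable; it is NOT placed in the environment.
MYSHELLVAR="local data"

# printenv searches only the environment, so it prints nothing here.
printenv MYSHELLVAR

# echo expands any variable the shell knows: prints "local data".
echo $MYSHELLVAR
```

We will see shortly that the export command is what promotes a shell variable into the environment.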

One element of the environment that neither 'set' nor 'printenv' displays is
aliases. To see them, enter the alias command without arguments.

alias

You will see all the defined aliases in your environment.

The environment contains quite a few variables, and though the environment will
differ from system to system, we will likely see the most common variables.

Environment
When we log on to the system, the bash program starts and reads a series of
configuration scripts called startup files, which define the default environment
shared by all users. This is followed by more startup files in our home
directory that define our personal environment. The exact sequence depends on
the type of shell session being started. There are two kinds.

There is a login shell session. This is one in which we are prompted for our
username and password. This happens when we start a virtual console session, for
example. There is also a non-login shell session. This typically occurs when we
launch a terminal session in the GUI. Login shells read one or more startup
files:

/etc/profile
~/.bash_profile
~/.bash_login
~/.profile

Non-login shell sessions read these startup files:


/etc/bash.bashrc

~/.bashrc

In addition to reading the startup files, non-login shells inherit the
environment from their parent process, usually a login shell. Take a look and
see which of these startup files are installed. Most are hidden so we will need
to use the '-a' option when using the 'ls' command. 

The ~/.bashrc file is probably the most important startup file from the ordinary
user's point of view, because it is almost always read. Non-login shells read it
by default, and most startup files for login shells are written in such a way as
to read the ~/.bashrc file as well.

Startup Files
Lines that begin with a # are comments and are not read by the shell. These are
there for human readability. The first interesting thing occurs below:

if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

This is called an 'if' compound command. It says that if there is a ~/.bashrc
file, then read it.

We can see that this bit of code is how a login shell gets the contents of
.bashrc. The next thing in the startup file has to do with the PATH variable.
The PATH variable tells the shell where to find commands when we enter them on
the command line. PATH is often set by the /etc/profile startup file with code
like this:

PATH="$PATH:$HOME/bin"

PATH is modified to add the directory $HOME/bin to the end of the list. Many
distributions provide this PATH setting by default. 

We also have this command to export our PATH.

export PATH

The export command tells the shell to make the contents of PATH available to
child processes of this shell.
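To make what export does concrete, here is a minimal sketch with a made-up variable name: an unexported variable is invisible to child processes, an exported one is inherited.

```shell
GREETING="hello"

# Without export, a child shell cannot see the variable:
# prints "child sees: []"
sh -c 'echo "child sees: [$GREETING]"'

# After export, the child inherits it:
# prints "child sees: [hello]"
export GREETING
sh -c 'echo "child sees: [$GREETING]"'
```

This is why PATH must be exported: every command we run is a child process that needs to inherit it.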

Modifying the Environment
Because we know where the startup files are and what they contain, we can modify
them to customize our environment. As a general rule, to add directories to your
PATH or define additional environment variables, place those changes in
.bash_profile. For everything else, place the changes in .bashrc.
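As an illustration, lines like these could go in the appropriate files; the directory and alias are examples, not requirements:

```shell
# In ~/.bash_profile: PATH changes and environment variables.
export PATH="$PATH:$HOME/bin"

# In ~/.bashrc: everything else, such as aliases.
alias ll='ls -l'
```

Keeping PATH changes in .bash_profile avoids appending the same directory again every time a new non-login shell reads .bashrc.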

Unless you are the system administrator and need to change the defaults for all
users of the system, restrict your modifications to the files in your home
directory. It is certainly possible to change the files system wide but it is
safer to not do so.

Text Editors
To edit the shell's startup files, as well as most of the other configuration
files on the system, we use a program called a text editor. A text editor is a
program that allows us to edit words on the screen. It differs from a word
processor by only supporting pure text and often contains features designed for
writing programs. Text editors are the central tool used by software developers
to write code and by system administrators to manage the configuration files
that control the system.

A lot of different text editors are available for Linux, so most systems have a
few installed by default. Text editors fall into two basic categories, graphical
and text-based.

There are many text-based editors. The popular ones we will encounter are nano,
vi, and emacs. The nano editor is a simple, easy to use editor designed as a
replacement for the pico editor. The vi editor, on most modern systems replaced
by vim, an enhanced version, is the traditional editor for Unix and Linux
systems. The emacs editor is an all-purpose programming environment available
for most Linux systems.

Using a Text Editor
Text editors can be invoked from the command line by typing the name of the
editor followed by the name of the file you want to edit. If the file does not
already exist, the editor will assume that we want to create a new file.

vim filename

This command will start the vim editor and load the file named 'filename'.

Graphical text editors are pretty self-explanatory, think word processor but
just for plain text. Programmers do not want a word processor because it would
not work well for programming and could mess up configuration files with its
formatting of text. 

Comments in Files
Whenever you modify configuration files, it is a good idea to add comments to
document your changes. Shell scripts and bash startup files use a # symbol to
begin a comment. Other configuration files may use a different symbol, but they
serve the same purpose.

You will often see lines in configuration files that are commented out to
prevent them from being used by the affected program. This is done to give the
reader suggestions for possible configuration choices or examples of correct
configuration syntax. 

Activating Our Changes
The changes we have made to our .bashrc or other configuration files will not
take effect until we close our terminal session and start a new one because the
.bashrc file is read only at the beginning of a session. However, we can force
bash to reread the modified .bashrc file with the following command:

source ~/.bashrc

After doing this, we should be able to see the effect of our changes.
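The reason source works is that it runs the file's commands in the current shell rather than in a child process, so variable assignments persist. A sketch with a throwaway file (the file name and variable are made up):

```shell
# Write a tiny startup-style file, then read it into the current shell.
echo 'DEMO_SETTING="enabled"' > /tmp/demo_rc

# The . command (a synonym for source) runs the file in the current
# shell, so its assignments take effect here.
. /tmp/demo_rc
echo $DEMO_SETTING     # prints "enabled"

rm /tmp/demo_rc
```

Had we run the file as a regular script instead, the assignment would have happened in a child process and vanished when it exited.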