
Jan 26, 2018

updated at: May 28, 2019

LPIC 1 - Linux Administrator


In 1969 Ken Thompson and Dennis Ritchie at AT&T Bell Labs developed the UNIX operating system; due to monopoly laws AT&T wasn't allowed to sell it. UNIX was later rewritten in C to make it more portable and eventually became a widely used operating system.

A lot of organizations made their own variants of UNIX, most were commercial, but the University of California at Berkeley made a noncommercial version called BSD.

In 1983 Richard Stallman started working on the GNU (GNU's Not UNIX) project. The GPL (General Public License) was created as a result of this, together with the FOSS movement, nowadays called FLOSS (Free/Libre and Open Source Software). However, the GNU kernel, called Hurd, never reached completion, even though the kernel is the most important part of an operating system.

In 1991 Linus Torvalds started developing the Linux kernel, a Unix-like kernel, and GNU/Linux was born. GNU/Linux is licensed under the GNU GPL. Essentially, GNU and Linux are "knockoffs" of UNIXes like BSD, though very high-quality knockoffs.

OS X adopted BSD's tools but threw out its kernel.

One thing you need to understand before you start with this guide is that everything in Linux is a file. This is an oversimplification, but it will help you understand how Linux works; in practice it's more accurate to say that everything is a stream of bytes.

System Architecture

Determine and configure hardware


DBus (Desktop Bus) is an IPC (Inter Process Communication) mechanism for local communication between processes running on the same host. Instead of each process communicating with every other process in a mesh, processes communicate through the DBus link, which is much cleaner.

The DBus daemon listens for Udev-events (explained later), if data is received, DBus reads the /dev directory and when it recognizes a device it will send signals to specific programs.

For example: Desktop notification if USB device got plugged in.


  p1 --- p2            p1    p2
  |  \/  |             |     |
  |  /\  |      ->     ==DBUS==
  | /  \ |             |     |
  p3 --- p4            p3    p4

Next to hardware related events DBus is also used for software events.

For example: Music player could let other software know what song is playing.

DBus has 2 kinds of daemons: a single system bus for system-wide events such as hardware changes, and one session bus per logged-in user for desktop applications.


Sysfs is a virtual file system that the kernel uses to provide information about the system to user space applications like Udev. Sysfs is mounted on /sys and replaced the older devfs approach of managing and viewing information about devices. The data in sysfs is commonly stored as plain text.

/proc/sys is part of procfs, not to be confused with sysfs. It is used to enable or disable kernel features. For example: echo 1 > /proc/sys/kernel/sysrq.


Udev is a mechanism for managing devices. Udev primarily manages devices in the /dev directory. Udev also handles all user space events raised while hardware devices are added into the system or removed from it. Udev gets its information from sysfs.

The kernel will notify Udev when devices are added/removed. It will send data to the netlink-socket (this socket is how the kernel communicates with user space). The Udev-daemon listens to this socket and acts on it.

Udev runs in user space and can change device names or customize kernel modules using Udev rules located in /usr/lib/udev/rules.d/ (packaged defaults), /run/udev/rules.d/ (runtime rules) and /etc/udev/rules.d/ (local administration).

Rule files from all directories are processed together in lexical order; for overlapping file names, /etc takes precedence.
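As a sketch, a rule file might look like this (the file name, serial number and symlink name are hypothetical):

```
# /etc/udev/rules.d/10-backup-disk.rules (hypothetical example)
# When a block device with this serial appears, also create /dev/backupdisk
SUBSYSTEM=="block", ATTRS{serial}=="123456789", SYMLINK+="backupdisk"
```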

udevadm is a user space utility to work with Udev. It monitors Udev events as well as kernel events.

$ udevadm monitor
$ udevadm info --query-all --name=<disk>

So how it all works:

  1. USB device gets plugged in.
  2. Kernel sends data to netlink, kernel makes information visible in sysfs.
  3. Udev daemon listens and reads sysfs.
  4. Device gets added to /dev by Udev (Udev rules are read).
  5. Udev sends notification and DBus daemon listens.
  6. DBus reads the /dev directory.
  7. Notification from desktop (file manager, ..).


Procfs is also a virtual file system, mounted at /proc. Quite a lot of system utilities are simply calls to files in this directory. For example: lsmod is the same as cat /proc/modules.

Commands to get information about block devices:

$ blkid
$ lsblk

Commands to get information about USB devices:

$ lsusb
$ usb-devices


Kernel modules are pieces of code that can be loaded into and unloaded from the kernel on demand. Their main goal is to extend the functionality of the kernel without the need to reboot the system.

To list currently loaded modules:

$ lsmod

To get information about a module:

$ modinfo <module>

To add/remove a module:

$ modprobe -v <module>
$ modprobe -rv <module> # remove

Commands to troubleshoot hardware

$ lspci -xxvvv # x: show hex, v: verbose
$ lscpu
$ lshw
$ cat /proc/meminfo

Kernel parameters are something different than modules; to see loaded parameters run: sysctl -a.

Booting the system

The boot process

  1. BIOS/UEFI: First you have the BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware Interface). Its main job is to find the bootloader. UEFI reads startup information from .efi files on the EFI System Partition (commonly mounted at /boot).

  2. POST: (Power On Self Test) activates on-board hardware and checks them.

  3. Partition table: MBR (Master Boot Record) or GPT (GUID Partition Table). The MBR partition table is located in the first 512 bytes of the disk and describes the layout of your disk. The system needs to know the partition scheme before it can search for the bootloader, which comes next. GPT is mostly used with UEFI but can also be used with BIOS because it has a 'protective' MBR.

  4. Bootloader: The bootloader is responsible for loading initrd and the kernel. There are several different bootloaders but the most used one is GRUB (Grand Unified Bootloader). You can specify kernel parameters in the bootloader like rw (read write permission), quiet (don't show boot info on screen when booting), splash (show an image on the screen when booting),.. .

  5. Initrd: The initial ramdisk. In the past you had initrd; now you have its successor initramfs (the step is still commonly called initrd). It is a small, temporary root file system whose job is to detect the modules needed by the main kernel and load them. It contains default drivers, executes checks and mounts the real root file system. Unlike initrd, which was a separate block device that had to be mounted and later replaced by the real root, initramfs is unpacked directly into RAM. It is stored as an .img file in the /boot directory. Without an initrd the main kernel would need to contain modules for every type of hardware and would be bloated.

  6. Linux: The main kernel, loads the init process. Kernels are stored as compressed images with the name vmlinuz-linux.

  7. Init: Init process (upstart, SysVinit or systemd). This is the first process that will be executed and is responsible for starting/managing all the other processes.

  8. Services: Depending on the runlevel/target services will be executed.

Example of a boot configuration file from /boot/loader/entries/arch.conf (EFI boot):

title Arch Linux
linux /vmlinuz-linux # compressed Linux kernel
initrd /initramfs-linux.img
initrd /intel-ucode.img
options root=PARTUUID='<puuid of partition>' rw # rw is a kernel parameter

There is a new kid on the block, efibootmgr, which lets you create a boot entry in UEFI without the need for a bootloader; UEFI will boot this directly. Real example: efibootmgr -d /dev/mmcblk0 -p 1 -c -L 'Arch Linux EFISTUB' -l /vmlinuz-linux -u 'root=/dev/mmcblk0p3 rw initrd=/intel-ucode.img initrd=/initramfs-linux.img'

Change runlevels/boot targets


SysVinit is an older init system. It is still used by some distributions, but most use systemd.


Level Function
0 Halt
1 Single user mode
2 Multiuser, without NFS
3 Full multiuser mode
4 Unused (custom)
5 X11, graphical
6 Reboot

Halt is not the same as poweroff, it does all what poweroff does except sending the ACPI poweroff signal to the motherboard.

To set the default runlevel in SysVinit edit /etc/inittab (the SysVinit configuration file) and modify the line id:<nr>:initdefault:.

Commands to change the runlevel on the fly:

$ telinit <nr>
$ init <nr>

Depending on the runlevel SysVinit will start or stop scripts on boot after changing runlevel. These scripts are stored in /etc/init.d/.

To see the current runlevel run:

$ runlevel

Each runlevel has a directory /etc/rc<nr>.d/ containing symbolically linked scripts from /etc/init.d/. The symbolically linked scripts have a naming convention:

K<nr><name> # kill script
S<nr><name> # start script

Example: K10cups, S25cups in /etc/rc5.d/
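The layout can be illustrated with a throwaway directory; the paths under /tmp below are hypothetical stand-ins for /etc/init.d and /etc/rc5.d:

```shell
# Mimic the SysVinit layout: scripts live in init.d, runlevels hold symlinks
mkdir -p /tmp/sysv-demo/init.d /tmp/sysv-demo/rc5.d
printf '#!/bin/sh\necho cups $1\n' > /tmp/sysv-demo/init.d/cups
chmod +x /tmp/sysv-demo/init.d/cups
# S<nr> links start the service when entering the runlevel, K<nr> links stop it
ln -sf ../init.d/cups /tmp/sysv-demo/rc5.d/S25cups
ln -sf ../init.d/cups /tmp/sysv-demo/rc5.d/K10cups
ls /tmp/sysv-demo/rc5.d
/tmp/sysv-demo/rc5.d/S25cups start   # runs the real script via the symlink
```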

To disable/enable scripts, optionally per runlevel:

$ chkconfig <name> on (--level <nr>)
$ chkconfig <name> off (--level <nr>)


Upstart is Ubuntu's replacement for SysVinit.

The main configuration file is /etc/init/rc-sysinit.conf. To change the runlevel edit the configuration file and change the line env DEFAULT_RUNLEVEL = .


Instead of runlevels, systemd uses targets. Unit files are stored in /usr/lib/systemd/system. Systemd focuses more on processes than on runlevels and consists of unit files. A unit file can be, among others, a service, socket, target, mount, automount, timer or device.

A target unit is a collection of services. Systemd can also be used to mount and automatically mount devices.


Level Systemd target
0 poweroff.target
1 rescue.target
2 multi-user.target
3 multi-user.target
4 multi-user.target
5 graphical.target
6 reboot.target

Systemd does not use runlevels; they are shown in this table only for comparison with targets.

To get a list of current available units and their status run:

$ systemctl list-units

To set/get the default runlevel run:

$ systemctl set-default <name>.target
$ systemctl get-default

So what happens if you set a default target? The selected target gets symbolically linked to /etc/systemd/system/default.target.

To change the runlevel on the fly run:

# systemctl isolate <name>.target

To disable/enable services:

# systemctl enable <service>
# systemctl disable <service>

To analyze systemd run:

$ systemd-analyze
$ systemd-analyze blame # list services ordered by startup time

A unit file looks like this:

[Unit]
Description = <text>
Requires = <>.target
Wants = <>.service
Conflicts = <>.service <>.target
After = <>.target
AllowIsolate = <yes|no>


To display kernel ring buffer messages (including driver messages) use:

$ dmesg -H # human readable
$ dmesg -w # continuously print newest entries
$ dmesg -C # clear


Broadcast message to all terminals. For example:

$ echo 'shutting down in 10 min' | wall


The use of shutdown:

# shutdown -r now # reboot
# shutdown -h +2 # halt the system in 2 minutes
# shutdown -P 13:26 # poweroff
# shutdown -P +3 'message' # sends wall messages

Linux Installation and Package Management

Design hard disk layout


In Linux every device partition is mounted somewhere on the root file system. To see all current mounts run:

$ mount

To mount a device run:

$ mount <device> <mountpoint>

To unmount:

$ umount <device> # to unmount

Another command to view mounts:

$ df -hT

In most distributions the hard disk has a root partition, a boot partition and a swap partition mounted.

Hard disk layout

Directory Function
/ The root partition
/boot VFAT file system (on EFI systems), ±500 MB, contains kernel(s) and bootloader info
/home User data
/srv Service data, databases
/usr Binaries, read only, kernel source files, libraries
/var Variable data, mail, cron, logs


Logical Volume Management, the main purpose of LVM is to abstract your storage by creating virtual partitions, which makes extending/shrinking easier. LVM is much more advanced and flexible than traditional methods of partitioning a disk. A big advantage of using LVM is that you can do most LVM operations on the fly, while the system is running. LVM can expand partitions while they are mounted.

LVM naming conventions:

pv # physical volume
vg # volume group
lv # logical volume

Example of LVM creation

# pvcreate /dev/sdb
# pvcreate /dev/sdc

# vgcreate hdd_vg /dev/sdb # hdd_vg is just a name
# vgextend hdd_vg /dev/sdc

# lvcreate -n part1 -L 3000MiB hdd_vg

# mkfs.ext4 /dev/hdd_vg/part1

So what happens is:

.________.  ._____________.
|   sdb  |  |     sdc     |
|   1G   |  |     2G      |
|________|  |_____________|

.________.
| hdd_vg |
|___1G___|

.________________________.
|         hdd_vg         |
|___________3G___________|

.________________________.
|          part1         |
|___________3G___________|

To display information about volumes:

# pvs
# lvscan


LUKS encrypts devices; once opened (decrypted) the device becomes available as a virtual device under /dev/mapper/. To create a LUKS device and use it run:

# cryptsetup luksFormat <device> # deletes everything on device
# cryptsetup luksOpen <device> <name>
# mkfs.<file system> /dev/mapper/<name>
# mount /dev/mapper/<name> <location>
# cryptsetup luksClose <name>


Swap space is used when the amount of physical memory (RAM) is full. The system will 'swap' chunks of memory also known as pages to the swap partition to free up the pages of memory in RAM.

Swap space (a partition on the hard disk) is orders of magnitude slower than actual RAM, so it is not a replacement.

Create swap:

# mkswap /<swap-partition>
# swapon /<swap-partition>

Swappiness is a kernel parameter which lets you tweak the way Linux swaps. It is a number between 0 and 100; the higher the value, the more aggressively pages are swapped. Change it with:

# echo <nr> > /proc/sys/vm/swappiness

Or to make it permanent edit /etc/sysctl.conf and add a line for example: vm.swappiness = 60.
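A quick sanity check of the current value (assumes a Linux system with procfs mounted; reading requires no root):

```shell
# Read the current swappiness value from procfs
val=$(cat /proc/sys/vm/swappiness)
echo "swappiness is $val"
```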

Swapping is not a bad thing. For example, a process which has been idle for a long time can be swapped out to free up RAM for other processes.

Boot managers


GRUB legacy is deprecated; use grub2 instead.

The main configuration file is /boot/grub/menu.lst. Sometimes this file is a symbolic link from /etc/grub.conf or something else.

Example of a menu.lst:

hiddenmenu # hides menu
default 0 # which stanza to load first
timeout 0 # time to display the menu

title <name> # begin of a stanza
root (hd0,0) # disk and partition GRUB boots from
kernel /path/to/kernel root=UUID=<uuid> <kernel flags>
initrd /path/to/initrd

Install grub to disk:

# grub-install <disk>

It is possible to easily back up your bootloader before you play with it; to back up the MBR run: dd if=<disk> of=backup.bootloader bs=512 count=1.


The main configuration file is /etc/default/grub, its templates are located in /etc/grub.d.

GRUB consists of stages.

To install grub2 run:

# grub2-install <disk>

To save and load changed configuration files run:

# grub2-mkconfig -o /boot/grub2/grub.cfg


# update-grub # Debian-based wrapper for the above

In case the GRUB menu does not display at boot, hold the right Shift key. Pressing e in the GRUB menu lets you edit boot parameters and c opens the command line.

Manages shared libraries

Programs in Linux commonly use the same pieces of code, so instead of duplicating it in every program, Linux makes it possible to use shared libraries. There are 2 kinds of libraries: static libraries (.a), which are copied into a program at compile time, and dynamic/shared libraries (.so), which are loaded at run time.

To see which libraries a program/command depends on use:

$ ldd <program>

Additional library search paths are configured in /etc/ld.so.conf and /etc/ld.so.conf.d/ (you can add custom library directories here). The actual libraries are stored in /lib, /usr/lib, ...

To recreate /etc/ld.so.cache (the cached current libraries) after changing something:

# ldconfig

This cache file speeds up the process of locating and loading shared libraries.

After installing software via a package manager, your system will do an ldconfig automatically.

You can set the environment variable LD_LIBRARY_PATH to add custom directories of shared libraries; the system will search these directories first.

Debian package management

Debian uses apt as its meta package handler to install software; it is a set of tools for managing Debian packages. apt looks at /etc/apt/sources.list (repositories) to install/update packages, and stores the newly installed .deb files in /var/cache/apt/archives.


# apt-get update && apt-get upgrade
# apt-cache search <package> # searches for specific package
# apt-get install <package>
# apt-get remove <package>
# apt-get purge <package>
# apt-get autoremove # cleans up unused dependencies
# apt-get check # checks for broken dependencies


aptitude is a newer Debian TUI package manager for installing .deb files from repositories.


dpkg is, unlike apt, a low level tool to install, remove and manage .deb packages.


# dpkg -i <p> # install
# dpkg -r <p> # remove
# dpkg -P <p> # purge
# dpkg -L <p> # list files installed by package
# dpkg -l # list all packages which are currently installed
# dpkg --get-selections # list all packages which are currently installed
# dpkg -S <file> # tell which package a file comes from
# dpkg -s <p> # details of package
# dpkg-reconfigure <p> # reconfigure package

RPM and YUM package management


yum is the meta package handler for rpm based distributions. yum looks at the repository files in /etc/yum.repos.d/ and at the configuration file /etc/yum.conf.

A repository file looks like:

[<repo-id>]
name=<description>
baseurl=<link> # fe. file:///repos
enabled=1
gpgcheck=0

Commands with yum:

# yum install, remove, update, groupinstall, localinstall, <p>
# yum search <p>
# yum info <p>
# yum list installed
# yum repolist all
# yum whatprovides <p>


yumdownloader will download the .rpm file of a certain package; adding --resolve will also download its dependencies.


rpm does the 'same' as dpkg does for Debian, but with .rpm packages.


# rpm -ivh <package.rpm> # install, verbose, show hash marks for progress
# rpm -V <package> # verify integrity of an installed package
# rpm -K <package.rpm> # check the file's signature/checksum
# rpm -e <package> # remove (erase)
# rpm -U <package.rpm> # upgrade to latest version
# rpm -qa # query, list all currently installed packages
# rpm -qi <package> # query information details
# rpm -ql <package> # query list of installed files
# rpm -qc <package> # query configuration files
# rpm -qd <package> # query documentation


.rpm files contain their payload as a cpio archive; to extract it, use cpio with -i: restore the archive; -d: create leading directories:

# rpm2cpio <package.rpm> | cpio -id 

GNU and Unix Commands

The command line


Bash, also known as the Bourne Again Shell, is the default shell in most distributions. Every shell comes with environment variables, for example $USER and $PWD.

To list all the current variables:

$ env

To get a list of all the current variables, shell builtins, local variables and functions:

$ set

To create a new variable and print your variable:

$ myvariable='hello world'
$ echo $myvariable

The newly created variable will only work in the current shell, to make it also work in 'shell-children' of the current shell run: export myvariable. To destroy the variable run: unset myvariable.
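The visibility rule can be demonstrated directly; the variable name below is arbitrary:

```shell
# A plain variable is local to the current shell
myvariable='hello world'
in_child=$(bash -c 'echo "${myvariable:-<unset>}"')
echo "before export: $in_child"    # child shell does not see it

# After exporting, child shells inherit it
export myvariable
in_child=$(bash -c 'echo "$myvariable"')
echo "after export: $in_child"     # now it is visible

unset myvariable                   # destroy it again
```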

If you press the up arrow you will see previously entered commands in reverse order; this feature is made possible by the .bash_history file.

To get help with commands use:

$ man <cmd>

The man pages are divided into different sections:

  1. Commands
  2. System calls
  3. Library calls
  4. Special files
  5. File formats and conventions
  6. Games
  7. Overview, conventions and miscellaneous
  8. System management commands

If you don't know the right syntax of the command you can do a string based search with:

$ apropos <string>
$ man -k <string>


List of filters mostly used after a pipe.

Command Common options Description
cat -n Concatenate files
cut -d ' ' -f Cut out selected fields
expand -t Convert tabs to spaces
fmt -w / -t Format text
pr -d / -l Convert to printable (paginated) text
head -n Print first lines
tail -n Print last lines
join - Join files on a common field
paste -s Merge lines of files
sort -r / -n Sort lines, default a-z
uniq -c / -u Filter adjacent duplicate lines
split - Split files on lines
diff/comm - Compare 2 files
od - Dump in octal and other formats
tr - Translate characters
wc - Count lines, words, chars
nl - Number lines
sed - Stream editor
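A short pipeline combining a few of the filters from the table; the sample file under /tmp is made up for the demo:

```shell
# Build a small sample file, then chain a few filters
printf 'pear\napple\npear\nbanana\n' > /tmp/fruit.txt
sort /tmp/fruit.txt | uniq -c | sort -rn   # count occurrences, most frequent first
wc -l < /tmp/fruit.txt                     # 4
```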

Basic file management

Basic commands

$ cp (-r) <source> <dest>
$ mkdir (-p) <dir>
$ mv <source> <dest>
$ rm (-r) <file>
$ touch <file>
$ file <file> # display file details


Convert and copy a file:

dd if=<file> of=<file> bs=<size> count=<nr>

if: input file; of: output file; bs: block size; count: number of blocks
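A tiny worked example of bs and count; the file names under /tmp are arbitrary:

```shell
# Copy only the first two 2-byte blocks of a file
printf 'abcdef' > /tmp/dd-in
dd if=/tmp/dd-in of=/tmp/dd-out bs=2 count=2 2>/dev/null
cat /tmp/dd-out   # abcd
```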

File archives

Compression algorithms

To create tar archives run:

$ tar cvf archive.tar /path/to/dir # to uncompress use -x
$ tar czvf archive.tar.gz /path/to/dir # gzip
$ tar cjvf archive.tar.bz2 /path/to/dir # bzip2
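A roundtrip sketch showing that x extracts what c created; the /tmp paths are placeholders:

```shell
# Create a gzip compressed archive and extract it somewhere else
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dst
echo 'hello' > /tmp/tar-demo/src/file.txt
tar czf /tmp/tar-demo/a.tar.gz -C /tmp/tar-demo/src .
tar xzf /tmp/tar-demo/a.tar.gz -C /tmp/tar-demo/dst
cat /tmp/tar-demo/dst/file.txt   # hello
```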

To make an archive of the output of a find command use:

$ find <dir> -name <string> | xargs tar cvf archive.tar

cpio is another tool to create archives (not compression). It works mainly with piped data. Example:

$ ls | cpio -o > newCpioArchive.cpio

Add compression:

$ ls | cpio -o | gzip > newCpioArchive.cpio.gz

File globbing

A shell does not understand regular expressions, but it has another mechanism: file globbing. Globbing recognizes and expands wildcards.

List of common globs:

#glob       what it does

*           match everything
?           match 1 character
[a,n]       match 1 character 'a' or 'n'
[a-n]       match 1 character 'a,b,c..n'
[!A]        exclude 'A'
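The globs above in action; the demo directory and file names are made up:

```shell
# Create three files and match them with globs
mkdir -p /tmp/glob-demo
cd /tmp/glob-demo
touch file1 file2 fileA
ls file[1,2]   # file1 file2
ls file?       # file1 file2 fileA
ls file[!A]    # everything except fileA
```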

Streams, pipes and redirects


All Linux commands have 3 streams open for them: stdin (0), stdout (1) and stderr (2).

Note: using > (or 1>) will overwrite existing data in the destination file. Use >> to append.

Example of a redirect:

$ ls <file> > output.txt 2>&1

Stdout is redirected into output.txt, and stderr to stdout so stderr is also going into output.txt.
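The same redirect, runnable end to end; file names under /tmp are placeholders:

```shell
# One existing and one missing file: stdout and stderr end up in the same file
cd /tmp
touch exists.txt
ls exists.txt /does-not-exist > /tmp/ls-out.txt 2>&1 || true
cat /tmp/ls-out.txt   # the listing and the error message, together
```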


tee is a command that sends a stream to multiple targets, for example the screen (stdout) and a file.

Example (outputs to screen and populate list.txt):

$ ls -l | tee list.txt


xargs takes the output of the previous command and passes it as arguments to the next command.


$ find . -name '*bash*' | xargs ls -l
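A self-contained variant of the same pattern; the demo directory is hypothetical:

```shell
# Feed file names found by find to ls via xargs
mkdir -p /tmp/xargs-demo
touch /tmp/xargs-demo/a.sh /tmp/xargs-demo/b.sh
find /tmp/xargs-demo -name '*.sh' | xargs ls -l | wc -l   # 2
```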

Create, monitor and kill processes

Basic commands

It is possible to customize the ps output. For example: ps -eo user,pid,nice,command.

If you run ps aux you will see a list of currently running processes with some extra columns: %CPU is the CPU usage. %MEM is the ratio of the process's physical memory to the machine's total. VSZ is the virtual memory size, including swapped out memory and shared libraries. RSS is the resident set size: how much RAM is allocated to the process. TTY is the controlling terminal associated with the process and STAT is the process status code.

Background processes

To run a process in the background add a & to the end of your command. Type jobs to see a list of all current background processes of the current shell. To bring one back to the foreground use fg 1. If a process is running from a terminal you can suspend it by pressing CTRL + Z and use bg 1 to send it to the background.

The numbers of the fg/bg command match the job number obtained by running jobs.

Prepend nohup to the initial command if you want the background process to stay alive even if the shell it was started from exits.

There is a utility called screen that lets you multiplex your terminal and keep sessions running in the background.

Process termination

To terminate a process you can use the kill command, which uses signals. A signal is a value between 1 and 31, and each signal represents a way of ending a process, differing in speed, severity and politeness: kill can gently ask a process to stop, or it can literally pull the plug on it. Each value also has a name.

List of common used signals:

Nr Name Function
1 sighup Hangup, sent when the parent shell closes (gentle)
2 sigint Interrupt (CTRL + C)
9 sigkill Kill (severe, cannot be caught)
15 sigterm Terminate (default, in between)
19 sigstop Stop (pause the process)

When a process terminates it uses the exit system call, which frees up the resources the process was using. When a process is ready to terminate, it lets the kernel know how it terminated with something called a termination status. Most commonly a status of 0 means the process terminated successfully.

When a parent process dies before a child process, the kernel knows it is not going to get a wait call, so it makes these processes orphans and puts them under the care of init. Init will call wait for these processes when they terminate.

If a child terminates before the parent has called wait, the kernel turns the child into a zombie process; a zombie cannot be killed. When the parent eventually calls wait, the zombie disappears; this is called reaping. If the parent never calls wait, init will adopt the zombie, call wait and remove it.
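Signals, wait and the termination status can be seen together in one small sketch (in Bash, a process killed by signal N reports status 128 + N):

```shell
# Start a long-running process, terminate it, and inspect its status
sleep 30 &
pid=$!
kill -TERM "$pid"           # send signal 15 (sigterm)
wait "$pid"                 # collect ("reap") the child's status
status=$?
echo "exit status: $status" # 128 + 15 = 143 signals death by SIGTERM
```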

Modify process execution priorities


nice is a command that lets you run a process with a custom niceness, a value between -20 and 19, where -20 is plain nasty and 19 is very nice. The lower the value, the more resources the process will grab for itself without regard for others.


# nice -n -10 apt-get install screen # nasty
# nice -n 10 ... # gentle
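A quick way to check that the niceness actually took effect: nice run without a command prints the current niceness, so it can report on itself.

```shell
nice            # prints the current niceness, typically 0
nice -n 10 nice # run nice itself at niceness 10: prints 10
```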

renice will, as the name says, renice a currently running process based on the pid or a username/groupname.


# renice <nr> -p <pid>
# renice <nr> -u <username> (-g <groupname>)
# renice -20  -p 1432

When renicing for a user/group it will renice all commands owned by that user/group.

The nice value lives in user space; the kernel itself uses a priority, the process's actual priority. In Linux this priority is a value between 0 and 139, where 0-99 is used for real time processes. For normal processes the relation is priority = 120 + niceness, which maps niceness values onto the range 100-139.

Search text files using regex


Regex can be used with grep by adding the option -E (extended regular expressions); egrep is equivalent to grep -E. (fgrep, by contrast, matches fixed strings and does not interpret regex.)


The sed command can also use extended regex by specifying the -r option.

List of regex

regex Function
^ Begins with
$ Ends with
[[:upper:]] Uppercase
* 0 or many
+ 1 or many
? 0 or 1
| Or
[abc] Match a,b or c
[a-n] Match a,b,c..n
. Single character
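A few of the table's operators combined in one grep -E invocation:

```shell
# Extended regex: anchors plus alternation; matches cat and car but not cart
printf 'cat\ncar\ncart\ndog\n' | grep -E '^ca(t|r)$'
```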

Devices and FHS

Create partitions and file systems


fdisk is a utility for managing partitions. Typing m will show a menu, p will list your current partitions. To create a new partition hit n, hit t to select a partition type. Finally hit w to write the changes to disk.

Other common tools to manage partitions are: parted and gdisk.

After reconfiguring your disk with new partitions, you will still need to format them. Use mkfs with -t to select a file system.


# mkfs -t ext4 /<disk>
# mkfs.<type> /<disk>

List of file systems: ext2, ext3, ext4, xfs, btrfs, vfat, ntfs, swap.

After changing partitions, it is recommended to run partprobe to inform the OS of partition table changes.

Maintain the integrity of file systems


A tool to monitor disk space:

$ df -h # human readable

To see how much disk space is being used by the content of a specific directory use:

$ du
$ du -sh # output human readable sizes
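A worked du example; the demo directory and file size are arbitrary:

```shell
# Create ~100 KiB of data and measure the directory
mkdir -p /tmp/du-demo
dd if=/dev/zero of=/tmp/du-demo/blob bs=1024 count=100 2>/dev/null
du -sk /tmp/du-demo | cut -f1   # size in KiB, at least 100
```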

Preventive monitoring

tune2fs is a tool to tune ext file system parameters, such as the maximum number of mounts and the maximum interval between file system checks.


Run a file system check on an unmounted device:

$ fsck <partition>

To run an automatic, non-interactive repair:

$ e2fsck -p <partition>

Another handy tool is debugfs, which drops you into an interactive file system debugging shell.

Controlling mounts


To mount a device use:

$ mount <device> <path/to/mount/to>

To unmount run:

$ umount <device>
$ umount <path/where/it/is/mounted/on>

Sometimes the mount command will not recognize the file system type; you can specify it by adding the option:

$ mount -t <type>

It can also happen that the drive is read only; add the -r option. The -a option mounts all file systems listed in /etc/fstab. Devices are commonly mounted on /media or its subdirectories.


The File System Table is located at /etc/fstab and is used to automount devices on boot, to be able to mount devices as non-root and to automate the process of mounting.

An entry in fstab looks like:

# options: defaults, rw, user (able to mount as nonroot), (no) auto, noexec
# dump: Decide when to make a backup, 0 means no backup 
# pass: Used by fsck to decide what order to check partition on (re)boot
# pass: 0 means not checking, 1 is used for root partition and 2 for other
<devices name or UUID> <mount point> <fstype> <options> <dump> <pass>

# real example:
UUID=6305-658c  /   btrfs   rw,relatime,auto  0   0


Crypttab is used to manage the mounting of encrypted devices. An entry in /etc/crypttab looks like:

<name> <underlying device> (<keyfile>) <options>

Manage disk quotas

quota is a utility that can limit disk space and number of inodes for a particular user or group.

You will need to install the quota package.

You will need to add the options usrquota and grpquota in fstab to the drive where you want to use quotas. Use mount -a to reload fstab.

To enable the new settings, you need to build a table containing the current disk usage, update the disk quota files and create an aquota.group and aquota.user file. All this is done by running:

# quotacheck -avmug # all, verbose, mounts, user, group

Turn quota on by running:

# quotaon -av

Generate a quota report using:

# repquota -av

To set quotas on users:

# edquota -u <username>

Here you will be able to set soft and hard limits. A soft limit means you can go over the limit but you will get a warning; a hard limit means you cannot go over it.

Manage file permissions and ownership


Every object within Linux has an owner, a group and a set of rules that determine exactly who and what gets access. These permissions are represented by 9 characters consisting of dashes and/or letters.

Letter Meaning
r read
w write
x execute

Type ls -l: the first character is the file type, the next 3 are user permissions, then 3 group permissions and finally 3 other permission characters.

x           # file type
 xxx        # user (u)
    xxx     # group (g)
       xxx  # other (o)

To change file permission use chmod.


$ chmod u+x <file> # add the execute permission for the user (owner)
$ chmod g-r <file> # remove the read permission for the group

You can also change permissions with octal numbers, where the first number represents the owner, then the group and the last number others:

Number Permission
4 read
2 write
1 execute


$ chmod 644 <file>
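The octal digits can be verified with stat; the -c format option is GNU coreutils specific, and the file name is a placeholder:

```shell
# Set permissions numerically and read them back
touch /tmp/perm-demo.txt
chmod 644 /tmp/perm-demo.txt
stat -c '%a %A' /tmp/perm-demo.txt   # 644 -rw-r--r--
```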


Change the ownership of a file with chown: specify a user, a colon and a group, followed by the file/directory. Add -R to change files recursively in a directory.

$ chown <user>:<group> <file>


umask sets the default permissions when a file is created: its value is subtracted from 777 (maximum permissions). So, for example, a umask of 022 means the default permissions of a newly created item will be 777-022 = 755, or in charset: rwxr-xr-x.

755 in this example will be the default for directories, however with files the execute permission will be stripped off, resulting in a default permission for files of 644.
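The file case can be checked directly (stat -c is GNU coreutils; the file name is a placeholder):

```shell
# Run in a subshell so the umask change does not leak out
(
  umask 022
  rm -f /tmp/umask-demo.txt
  touch /tmp/umask-demo.txt
  stat -c '%a' /tmp/umask-demo.txt   # 644: files never get execute by default
)
```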

Suid, sgid and the sticky bit

Suid elevates any user who executes a file to the status of the owner of that file. Sgid does the same but for the group. The sticky bit is used on a directory to protect files from being deleted by other users, even if they have sufficient rights otherwise.

What Letter Number
sbit --t 1000
sgid --s 2000
suid --s 4000


$ chmod 1644 <file>


In Linux you have 2 kinds of links: hard links and soft (symbolic) links. A hard link is an extra reference to the same inode of a file; if the original file is deleted the link still works, because the system cannot delete an inode that is still being referenced. A soft link points to a file path rather than an inode, so the linked file gets its own inode; that is why a symbolic link breaks when the original file is removed.

Create hardlink:

$ ln <source file> <new hardlinked file>

Create softlink:

$ ln -s <source file> <new softlinked file>
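The difference shows up when the original file is deleted; the /tmp file names are arbitrary:

```shell
cd /tmp && rm -f orig.txt hard.txt soft.txt
echo 'data' > orig.txt
ln orig.txt hard.txt       # hard link: same inode
ln -s orig.txt soft.txt    # soft link: points at the path
rm orig.txt
cat hard.txt               # data: the inode survives
cat soft.txt 2>/dev/null || echo 'soft link is broken'
```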

Find system files and put them in the correct location


The Filesystem Hierarchy Standard (FHS) is commonly used by most distributions to define a common layout for their file structure tree.

Key directories in Linux:

directory Function
/bin Essential binaries, core system utilities, shells
/dev Device files
/etc Text-based configuration files
/home/user A user's home directory
/lib Shared code libraries
/usr Application files
/var Variable data, logs, cache
/boot Bootloader files and kernel images
/media Mount point for removable devices
/mnt Mount point for temporarily mounted filesystems
/opt Optional/third-party software
/proc Pseudo file system representing processes and kernel state
/root Root's home directory
/run Runtime data storage
/sbin Admin binaries
/srv Site-specific data (httpd, ftpd)
/sys System hardware info
/tmp Temporary files

Search tools

find will help you search for files, but there is also locate, which is much faster because it relies on an indexed database. To update this database and use locate run:

# updatedb
$ locate <string>

Example of find:

$ find <directory> -name '*<string>*'

You can customize which paths updatedb indexes (and therefore what locate finds) in /etc/updatedb.conf.

To display the location of a command's binary, source and man page(s) run:

$ whereis <command>

To get the location or see if a command is aliased run:

$ which <command>

To identify commands run:

$ type <command>
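The three tools answer slightly different questions; type also knows about shell builtins and aliases, while which only searches $PATH:

```shell
type cd        # "cd is a shell builtin"
type ls        # alias or path, depending on your shell setup
which ls       # searches $PATH only, e.g. /usr/bin/ls
```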

Shell Scripting and Data Management

The shell environment


When a login shell launches (when you log in to your system) it reads /etc/profile first, if it exists, and then the first one it finds of ~/.bash_profile, ~/.bash_login or ~/.profile.

When a non-login shell launches (a normal shell session, for example: starting a terminal) it will read /etc/bash.bashrc and ~/.bashrc files.

This only applies to systems where your default shell is Bash.

After changing a .bashrc file you need to source it so Bash picks up the changes. You can do this with:

$ . <path/to/.bashrc>

Another file ~/.bash_logout will control the way a shell session exits.

Aliases and functions

It is possible to alias commands to something else, use:

$ alias vim='nvim'

To unalias use:

$ unalias vim

It is also possible to create shell functions; you can define them, along with aliases, in ~/.bashrc to make them available in every new shell. The syntax of a function is as follows:

<funcname> () {
  <do something>
}
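For example, a small function defined and called in the same shell (greet is a made-up name for illustration):

```shell
greet () {
  echo "hello $1"   # $1 is the function's first argument
}
greet world         # prints "hello world"
```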

The $PATH variable contains a list of directories. It lets the shell find commands in those directories without you specifying the full path; you only need the command's name.

For example:

$ ls # instead of /usr/bin/ls

To add a custom directory to your $PATH variable use:

$ export PATH=<directory/to/add>:"$PATH"
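A sketch of the effect, using a hypothetical hello script in a scratch directory:

```shell
cd "$(mktemp -d)"
mkdir bin
printf '#!/bin/sh\necho hi from bin\n' > bin/hello
chmod +x bin/hello
export PATH="$PWD/bin:$PATH"   # prepend our directory to the search path
hello                          # found via the PATH lookup, no full path needed
```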

Write simple scripts


This line usually sits at the top of a script and consists of a # followed by an ! and then the path of the interpreter (Bash, Python, Perl, ...). If you want a more portable shebang you can use env to look up the path to the interpreter automatically. Inside the script the shebang does nothing; the program loader uses it to pass the script to the right interpreter.

For example:

#! /usr/bin/env python

It is possible to add parameters for the interpreter in the shebang, which is handy for debugging.

To make a shell script executable, so it can be run via its path (full path or ./scriptname), use:

$ chmod +x <script>

User input

To read user input in a script use:

$ read variable1

The user input gets stored in the variable variable1 and can be used:

$ echo "your variable is: $variable1"

Testing values

test is a command to check specific conditions: whether a certain file exists, whether 2 variables are equal, whether a file is executable, ... Instead of test you can use [ .. ] or [[ .. ]] (double brackets have more features but are not in the POSIX standard).

For example to test whether a file exists run:

$ test -e <file>
$ [ -e <file> ]
$ [[ -e <file> ]]

Other test operators:

operator Check if
-x Executable
-eq Equal
-d Directory
-f Regular file
-a And
-ne Not equal
-gt Greater than
-lt Less than

Testing something will produce an exit code: 0 if expression is true, 1 if expression is false.

Another way of testing is with the normal = and != for comparing 2 strings.
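The exit codes are easy to inspect with $? right after the test:

```shell
test -e /etc/passwd; echo $?   # 0: the file exists
[ "a" = "b" ]; echo $?         # 1: the strings differ
```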

Control operators

You can control the sequence of commands with && and ||. A double ampersand executes the next command only if the first one succeeded; a double pipe only if the first one failed.


$ [[ "test" == "test" ]] && echo "I am printed!"
$ [[ "test" == "NOTTEST" ]] || echo "I_GET_PRINTED_ASWELL"

Scripting parameters

A shell has a bunch of handy parameters:

Parameter Function
$? Exit code of previous command
$$ PID of script
$# Count arguments
$* All the arguments taken as 1 argument
$0 Name of the script
${@} All the arguments, positions kept

Passing arguments/options

A script can be used with parameters/arguments.

To pass an argument to a script just append your argument to the script when executing:

$ ./<script> <argument1> <argument2> <..>

In the script you can access these arguments through the positional parameters $1 to $9, where $1 is the first argument. This does not mean you can only pass 9 arguments: use shift in your script to move the parameters one place to the left (so, for example, $6 becomes $5), or in Bash use ${10} and up.
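A short script (written to a temp file here purely for illustration) makes the shifting visible:

```shell
script="$(mktemp)"
cat > "$script" <<'EOF'
#!/bin/sh
echo "first: $1, count: $#"
shift                      # every argument moves one place left
echo "first: $1, count: $#"
EOF
sh "$script" a b c
```

This prints `first: a, count: 3` followed by `first: b, count: 2`.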

Now if you want to pass options you can use getopts which will parse options given to a script.


while getopts "h" option; do
  if [[ $option = h ]]; then
    echo "usage: $0 [-h]" # show the help
  fi
done


shopt is a Bash shell builtin to set and unset various shell options. To enable or disable an option run:

$ shopt -s <option> # set
$ shopt -u <option> # unset (-q queries an option silently)

To see current settings:

$ shopt


if then elif else

if [[ $1 -gt 6 ]]; then
  echo "Your argument is greater than 6"
elif [[ $1 -eq 6 ]]; then # optional
  echo "Your argument is equal to 6"
else
  echo "Your argument is lower than 6"
fi # end if
while until

while [[ $i -lt 5 ]]; do # can also be until
  echo $i
  let i++
done

for i in 1 2 3 4; do
  echo $i
done

for i in $(seq 20); do
  echo $i
done

It is possible to mail the superuser or any other user by piping text into the mail command. Local mail is stored in /var/mail.

$ echo "something" | mail -s "subject" root

SQL data management

Install mysql, enter its subshell by:

$ mysql -u <user> -p # -p prompts for the password

Basic commands

Manage a database
show databases;

create database <name>;

use <dbname>;
Manage tables
show tables;

create table <table> ...;

insert into <table> ...;
select * from <table> where ..;

select * from <table> order by .. join <table> on ..;

User Interfaces and Desktops

Install and configure X11

X is the engine that makes it possible to interact with a GUI. The main configuration directory is /etc/X11, where xorg.conf is located (you may need to create it). An X configuration file looks like:

Section "<device>" # for example "Monitor"
  Identifier "<>" # for example "eDP-1"
  Option "<>"
EndSection

xorg.conf can be split into separate configuration files; they are stored in /etc/X11/xorg.conf.d.

If you want to view your current X settings run:

$ xdpyinfo | less

To get interactive information for any open window run:

$ xwininfo # then click the window

A utility for server access with X, it allows others to connect:

$ xhost

There is a variable that defines your display:

$ echo $DISPLAY

To manage your desktop environment use:

$ lxappearance

Setup a display manager


Lightdm is a commonly used display manager that is not tied to a particular distribution. A display manager handles authentication and logs you in to a window manager or desktop session. Lightdm's configuration is stored in /etc/lightdm/lightdm.conf and /etc/lightdm/lightdm.conf.d.

A couple of options you could set in the configuration file:


# a real example:

greeter-session = lightdm-gtk-greeter
session-wrapper = /etc/lightdm/Xsession

But there is a handy utility that will set this all for you in a nice GUI:

# lightdm-gtk-greeter-settings


xhost is a utility for server access to X sessions. Xhost is used to allow and disallow users to connect.

To let everyone connect to you:

$ xhost + # use minus to close access for everyone

To let a specific user connect:

$ xhost + <ip>


Ubuntu is often recommended as a distribution for people with special needs. brltty is a package that drives a braille display. Most accessibility tools can be controlled through what Ubuntu calls the Universal Access panel.

Administrative Tasks

Manage user and group accounts and related system files.


A user is stored as a profile in /etc/passwd; this profile is in fact an entry in this file. It contains: the username, a password field, uid, gid, a comment (GECOS) field, the home directory and the default shell.

An entry in /etc/passwd looks like:

<username>:x:<uid>:<gid>:<comment>:<home_directory>:<default_shell>

The x means the password of the user is encrypted and stored in /etc/shadow; in old configurations the password hash itself was stored in this field.

/etc/shadow is the file where encrypted user passwords are stored. It is readable only by privileged processes (permissions 000 or 640 depending on the distribution) and an entry looks like:

<username>:<password>:<time_since_password_changed>:<min_password_lifetime>:<max_password_lifetime>:<warning_days>:<inactive days>:<account status>

Instead of a password there can be an ! this means this account is not accessible.

Not all fields need to contain data, the data field account status means if the account is enabled or disabled.

To add a user to the system use: useradd (on some distributions this is adduser).


# useradd -d /home/dirk -m -s /bin/bash -g wheel -G wheel -u 1500 -c 'this is dirk'

-d: home directory; -m: create home dir; -s: default shell; -g: primary group; -G: additional groups; -u: uid; -c: description

To modify data from a user use usermod:

# usermod -L <user> # lock
# usermod -U <user> # unlock
# usermod -s <shell> <user> # set shell
# usermod -aG <group> <user> # add to group

To remove a user:

# userdel -r <user> # remove user and home dir

Set a password for a user:

# passwd <user>

If you want users to update their passwords regularly use chage, for example:

# chage -m 5 -M 30 <user> # min 5 days, max 30 between changes

To see the current settings of a user's chage:

$ chage --list <user>


Groups are stored in /etc/group, an entry looks like:

<groupname>:<password (mostly empty)>:<gid>:<members>

/etc/gshadow is a file containing additional data of groups. Entries look like:

<groupname>:<password>:<administrators>:<members> # group passwords are rarely used

To add a group to the system use:

# groupadd -g <gid> <name> # -g is optional

To delete a group use:

# groupdel <groupname>

To modify a group, for example its name use:

# groupmod -n <newname> <oldname>


getent is a utility to get entries from name service switch libraries, for example: passwd, group, host, aliases and networks.
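For example, querying the passwd and hosts databases (the output mirrors the corresponding /etc/passwd and /etc/hosts entries):

```shell
getent passwd root      # the root entry from the passwd database
getent hosts localhost  # resolves via the NSS "hosts" sources
```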

Scheduling tasks


Cron is a daemon that executes scheduled tasks. The main configuration file is /etc/crontab, but there are also time-based cron directories in /etc: executable scripts placed in them run with the frequency the directory name indicates. The directories are /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly.

crontab is a utility to store scheduled tasks for individual users. /etc/crontab is used for system jobs. For individual use, the data is stored in /var/spool/cron/ but you should use crontab -e to edit tasks. The crontab schedule format looks like this:

#  .---------------- minute (0 - 59)
#  |  .------------- hour (0 - 23)
#  |  |  .---------- day of month (1 - 31)
#  |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
#  |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
#  |  |  |  |  |
   *  *  *  *  * user-name  command-to-be-executed

Special crontab entries are:

0,1,15,45 * * * ... # at minutes 0, 1, 15 and 45 of every hour
*/15 * * * ... # every 15 minutes
* * * * 1-5 ... # 1-5 weekdays
@daily <command>
@weekly <command>
@reboot <command>

A '*' means anything so if you would have an entry with all '*' it would execute every minute.
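Putting the fields together, a hypothetical system-wide entry in /etc/crontab (the script path is made up for illustration) could look like:

```
# run a backup script at 02:30 on weekdays, as root
30 2 * * 1-5 root /usr/local/bin/backup.sh
```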


Anacron's main configuration file is /etc/anacrontab; its job is to run jobs that were missed because of computer shutdown, errors, system failure, ... Anacron does not assume that your computer runs 24/7: jobs run periodically with a frequency specified in days, and timestamps are stored in /var/spool/anacron. Entries use the following format:

# .-------- number of days between executions
# |  .----- how long in minutes anacron waits after booting before executing
# |  |  .-- job identifier (names the timestamp file)
# |  |  |
  *  *  <job-id>  <command>


at schedules a job for a single execution at a specific time. To set up a job run:

$ at <time> # type the command(s), end with CTRL + D
$ at 17:00

To see scheduled events:

$ atq

To delete scheduled events:

$ atrm <jobnr>

Access control

If you use cron or at you can specify access control by adding following files in /etc/:

cron.allow || cron.deny
at.allow || at.deny

If a .allow file exists, only users listed in it can schedule tasks, so an empty .allow file means no one can schedule jobs. A .deny file means the listed users cannot create scheduled tasks; the .allow file takes precedence if the two overlap.

Localization and internationalization

Setting a date can be done by:

# date --set='<date>'
# timedatectl set-time 'yyyy-MM-dd hh:mm:ss' # systemd way


Information about timezones is located in /usr/share/zoneinfo. To set your timezone you can use one of the following:

# ln -sf /usr/share/zoneinfo/<Region>/<place> /etc/localtime
# timedatectl set-timezone
# tzselect # TUI
# export TZ=<Region>/<place>


Your local language settings. To list your current locale settings:

$ locale

There are quite a few variables listed here. By default they all follow the LANG variable. To get a list of the other locales available to you run:

$ locale -a

To add more options to this list uncomment lines in /etc/locale.gen and run locale-gen.

The variable LC_ALL takes precedence over the other LC_* settings. The C locale (LANG=C) is the default POSIX locale; it is used in scripts to avoid surprises from languages/translations.

Character encoding

Character encoding plays a large role in localization. The most used encoding is UTF-8 (Unicode Transformation Format, 8-bit), which can encode most languages. There are other options as well, such as ASCII and the ISO-8859 family.

To convert a text file between character encoding use:

$ iconv -f ascii -t utf8 <oldfile> > <newfile>
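For example, converting a Latin-1 byte sequence to UTF-8 (the file name is arbitrary):

```shell
printf 'caf\xe9\n' > latin1.txt          # "café" encoded in ISO-8859-1
iconv -f iso-8859-1 -t utf-8 latin1.txt  # prints café
```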

Essential System Services

Maintain system time

The hardware clock

The hardware clock (also known as the BIOS clock or real-time clock, RTC) keeps time even when the machine is off, while the system clock is maintained by the kernel and is the one software actually uses. The hardware clock measures time in complete isolation from the rest of the world; you can see the current hardware time with:

# hwclock -r

To update the system clock so it will be set to the current hardware time:

# hwclock --hctosys # hardware clock to system


# hwclock --systohc # system to hardware clock

To tell hwclock whether the hardware clock keeps local time or UTC use:

# hwclock --localtime
# hwclock --utc

You can also set your hardware clock manually:

# hwclock --set --date='<date>'

Network time protocol

In order to have a synced clock you should use a NTP service. Install the package ntp if not already installed. The main configuration file is located at /etc/ntp.conf. To list existing peers from the NTP service run:

# ntpq -p

It is possible to log statistics by adding a line to /etc/ntp.conf:

statsdir /var/log/ntpstats/

To be able to be in sync you need a source. Add server pools to the configuration file from www.pool.ntp.org (more than 1 is advised).

For example:

server 2.be.pool.ntp.org

It is also possible to sync once manually against a server:

# ntpdate <server-pool.ntp.org>

Now enable and start the ntpd daemon. Instead of a server abroad you can set your computer as NTP server by specifying this line in /etc/ntp.conf:

broadcast <ip>

System logging


Rsyslogd is one of several syslog implementations that let you control the creation and movement of log data. On systemd systems syslogd is replaced by journald. The main configuration file for rsyslog is /etc/rsyslog.conf and the directory for customized files is /etc/rsyslog.d; the default file there usually contains the most information. Rules have a specific format: a facility plus a priority, and then the action.


# when there is an error in the facility mail, log data to specific file
mail.err /var/log/mail.err 

List of facilities, priorities and actions:

Facility Priority Action
auth emerg log to file
cron alert user
daemon crit pipe
ftp err (error) remote host
kern warning
mail notice
user (default) info (default)


By default systemd-based distributions use the journald binary logging system. The journal itself is written to /var/log/journal/. You can use journalctl to view and manage logs. The main configuration file is /etc/systemd/journald.conf. Some options of journalctl:

Options Function
-e 1000 most recent entries
-f Watch for new entries
-r Reversed
-k Kernel logs
-b Boot logs
-u Log specific unit, add the unit


This tool is for logging messages from a script. Watch journalctl -f and do:

$ logger <text>

You can also control where a message is logged and the priority.


$ logger -p lpr.crit <text>


Will move large log files, or archive them (this is called rotation). The main configuration file is /etc/logrotate.conf. In this file you can specify how long backlogs need to be saved, if old log files should be compressed, ... A package can have its own logrotate rules in /etc/logrotate.d/.

Mail transfer agent basics

There are a couple of MTAs you could use. sendmail is the most popular one, the oldest and the hardest to configure. qmail was designed to replace sendmail but is not GPL-licensed and is no longer developed. exim has a lot of features and supports ACLs. The default on most distributions is postfix: it has a clear configuration and is pretty easy to use. You can use it by installing postfix and mailutils (mailx on some distributions). The main configuration file for postfix is /etc/postfix/main.cf. To send a mail to a local address use:

$ sendmail <user> # enter
$ <message> # enter

Press: CTRL + D

Or you could do:

$ echo "<message>" | mail -s "<subject>" <user>

Mails are stored in /var/spool/mail/.

Type mail to list the mails the account has received. You can create aliases in /etc/aliases to alias a recipient. To update the new aliases run:

$ newaliases

If you want to forward mails to other users, add a .forward file with the users you want to forward to in your home directory.

$ touch ~/.forward

To view pending mails run:

$ mailq

Manage printers and printing


The Common Unix Printing System (CUPS) is the most used printer manager for Linux. You can manage printers via the web panel at localhost:631 or the main configuration file /etc/cups/cupsd.conf. Printer queues are stored in /var/spool/cups. Another configuration file is /etc/cups/printers.conf; it changes a lot and is useful when debugging.

You can block new jobs from reaching a printer with cupsreject and unblock with cupsaccept.


Before CUPS there was the Line Printer Daemon (LPD). Nowadays the LPD-style tools are CUPS-aware and know about installed printers. They are still used to print files:

$ lp <file> 
$ echo "<text>" | lp

If you got more than one printer connected run:

$ lp -d <printername> <file>

If you are not sure what your printer is called or see printing jobs run:

$ lpq

To remove a job:

$ lprm <jobnr>
$ lprm - # remove all jobs

Networking Fundamentals

Fundamentals of Internet Protocol

Broadly speaking modern networks rely on three conventions to solve the problem of addressing: transmission protocols (TCP, UDP and ICMP), network addressing (IPv4 and IPv6) and service ports.

Transmission protocols.

TCP (Transmission Control Protocol) carries most web, E-mail and FTP communication. It verifies for packet completeness. UDP (User Datagram Protocol) is a good choice when verification isn't needed. ICMP (Internet Control Message Protocol) is most used for quick and dirty exchanges like ping.


An IPv4 address is made up of 4 sets of 8 bits (octets), separated by dots. The octets on the left identify the network, the octets on the right identify individual nodes. To know where the node part begins you use a netmask: a netmask of 255.255.255.0 means 24 bits for the network and 8 bits for individual hosts. You can also use CIDR (Classless Inter-Domain Routing) notation to define network/nodes: /24 means 24 bits are reserved for the network.

192.168.1.10 # ip address, base 10

# same address but in bits
1100 0000 . 1010 1000 . 0000 0001 . 0000 1010 # base 2

You can only reach hosts inside your own network (as defined by the netmask/CIDR prefix) directly; to reach anything else you need a gateway.

There is a difference between public and private networks. The private IPv4 ranges are:

10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
All IPv4 addresses fall into one of 3 classes:

Class Range
A 1 .. 127
B 128 .. 191
C 192 .. 223


An IPv6 address is made up of 8 groups of 16 bits in hexadecimal notation (base 16), separated by colons. One run of consecutive all-zero groups may be replaced by a double colon (::), at most once per address.



Service ports

List of well known service ports:

Port Function TCP UDP
20 FTP (data) X
21 FTP (control) X
22 SSH X X
23 Telnet X
53 DNS X X
110 POP3 X X
119 NNTP X
139 netBIOS X X
143 IMAP X X
161 SNMP X X
389 LDAP X X
514 Remote shell X
995 POP3S X X

To check whether a remote port is open you can use telnet:

$ telnet <host> 80

You can view an up-to-date list of the well known and ICANN (Internet Corporation for Assigned Numbers and Names) registered ports in /etc/services.

To look up DNS records you can use host or dig.

To see a path and hops to a certain network you can use tracepath:

$ tracepath <host>

Basic network configuration

Basic network files and tools

To see information about your current network run:

$ ifconfig
$ ip a

To enable an interface use:

$ ifup <interface> # ifdown to disable
$ ip link set dev <interface> up # or down

To add an address manually:

# ip a add <ip>/<CIDR-prefix> dev <interface>

It is more common however to ask an address from a DHCP (Dynamic Host Configuration Protocol) server. This is normally done automatically but to do it manually run:

$ dhclient <interface>

To view your route to the Internet use:

$ route

To add a route with a gateway:

$ route add default gw <ip>
$ ip route add default via <ip>

Some other configuration files are:

Red Hat network files

Main configuration files: /etc/sysconfig/network-scripts/ifcfg-<interface>. They look like:

ONBOOT=<yes|no>
IPADDR=<ip>
NETMASK=<netmask>
DEVICE=<interface>
BOOTPROTO=<> # boot protocol, e.g. dhcp

/etc/sysconfig/network looks like this:

NETWORKING=<yes|no>
NETWORKING_IPV6=<yes|no>

Debian network files

/etc/network/interfaces looks like:

auto <interface>
iface <interface> inet static
  address <ip>
  netmask <mask>
  gateway <gateway>

Basic network troubleshooting

The first step in troubleshooting the network is to look at ifconfig / ip, then dmesg, ping and traceroute. You can also look at netstat or ss:

$ netstat -tulpna 
$ netstat -rn
$ ss -ltnp

You can also troubleshoot with netcat:

$ nc -z -v <ip> <port>

To restart your network run:

# service networking restart # upstart / sysVinit
# systemctl restart NetworkManager.service # systemd

Always check the routing table, and verify that the default gateway points to your router!

Configure client side DNS

If your machine is a DHCP client, nameservers are usually handed out by the DHCP server and written to /etc/resolv.conf. To add DNS settings on a Debian machine add these lines to /etc/network/interfaces:

dns-nameservers <ip> <ip> ...
dns-search <site>

To add IPv6 support add this line to /etc/hosts:

::1 localhost6

If you want private address translation edit /etc/hosts and add:

<ip> <name>

Now it is possible, for example, to ssh to the name instead of the address.

The mapped host must of course be reachable, so check your gateway and edit /etc/resolv.conf if needed.


Perform security administration tasks


PAM (Pluggable Authentication Modules) is a framework that assists applications in performing authentication-related activities. It simplifies authentication management. Authentication is the process of determining that a subject is who it claims to be. PAM nowadays does more than just this: it also manages resources, restricts access times, enforces good password selection and so on.

Benefits of using PAM are: simplified centralized authentication management for the administrator, so you don't need to write your own authentication routines. A second benefit is flexible authentication.

How PAM works:

Typically, if a single PAM module returns a failure status, access to the application is denied; however, this depends on the control flags in the configuration file.

The configuration files are located in /etc/pam.d/ (one file per service); /etc/pam.conf is used if that directory is absent.

These configuration files are made up of a context, a control flag, a module and arguments:


Context C-flag Module Argument
auth required pam_unix.so nullok
account include common-account

In this example users are forced to create passwords with a minimum length of 12, at least 1 uppercase letter, 1 lowercase letter, 2 digits and 1 other character (negative credit values mean "require at least this many"). To make this happen edit /etc/pam.d/passwd to:

password required pam_cracklib.so minlen=12 lcredit=-1 ucredit=-1 dcredit=-2 ocredit=-1


Access Control Lists are used to allow/disallow individual users to read/edit/remove files. ACLs take precedence over the standard Linux permissions. A file with ACL permissions gets a + at the end of its standard permission string, which means it has extended (ACL) permissions. To view ACL info use:

$ getfacl <file>

ACL's are often used on directories because with ACL you can set default permissions and ACL permissions for individual users. To set ACL permissions to a file run:

$ setfacl -m u:<user>:<r|w|x> <file>

Delete ACL:

$ setfacl -b <file>

To make a default ACL on a directory, so all files in it get the same ACL rights:

$ setfacl -m d:u:<user>:<r|w|x> <directory> # default for a specific user
$ setfacl -m d:o::<r|w|x> <directory> # default for everyone else (others)


Security-Enhanced Linux (SELinux) provides process sandboxing, which means processes cannot access other processes or their files unless special permissions are granted. SELinux limits what applications are able to do; it manages access by port, application or location. SELinux can reside in one of three states: enforcing, permissive (violations are only logged) and disabled.

SELinux has the upper hand over ACL, which has the upper hand over the standard Linux permissions.

If SElinux is installed, every file in the system is labeled with contexts from SELinux. The default mode is 'targeted' which means files, directories, processes and ports are labeled according to the access required to access them. To show the context of files:

$ ls -Z
$ ps auxZ # contexts of processes

A real example: a process labeled httpd_t can access files labeled httpd_sys_content_t, but httpd_t has no access to user_home_t.

To change a label/context run:

$ chcon -R -t <type> <file>
$ restorecon -vR <directory> # restores labels in a directory 

To manage processes and ports better SELinux uses booleans for certain processes. To list them:

$ getsebool -a

To turn a boolean on or off:

$ setsebool <boolean> <on|off>

To list the settings of SELinux on ports run:

$ semanage port -l


On a Red Hat machine sudo may not be set up for your user; instead use:

$ su -c <command>

On Debian-based systems such as Ubuntu the root account often has no password set, so use sudo there.

You can edit the way sudo works through /etc/sudoers but it's better to use visudo to edit the file.


To see all users currently logged in use:

$ w
$ who

To see a history of logins use:

$ last

Monitor files and ports

You can list all processes and their users that have a specific file open with:

$ lsof <file>

Adding +D option will list all open files within a specified directory. Using -u option will show open files by a user:

$ lsof -u <user>

Using -i option will list all open network connections:

$ lsof -i

Much of the same functionality of lsof can be found in fuser. To kill all processes accessing a file use:

$ fuser -k <file> # add -m to act on the whole filesystem the file lives on

To see which processes are using TCP port 80 (for example to spot unauthorized access) run:

$ fuser -v -n tcp 80

To scan the network use:

$ nmap -sU <ip> # udp scan
$ nmap -p <port> <ip> # scan a specific port
$ nmap -A <ip> # aggressive scan (OS and version detection, scripts)


chage is a utility to change a user's password expire information.

$ chage -l <user> # shows information
$ chage <user> # interactive mode

Finding SUID and SGID

To find files with SUID or SGID permission:

$ find / -type f -perm -u+s
$ find / -type f -perm -u+s,g+s # dash prefix: all listed bits must be set
$ find / -type f -perm /u+s,g+s # slash prefix: any one bit is enough
$ find / -type f -perm -u+s -ls

To find owner less files:

$ find / -xdev \( -nouser -o -nogroup \) -print


You can set limits on the system resources available to specified users or even groups. This can be done with:

$ ulimit

However this only applies to the current shell session; to make limits permanent edit /etc/security/limits.conf. To view a list of all current limits run:

$ ulimit -a

For example to limit the file size of a file a user can create:

$ ulimit -f <size>
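Limits set with ulimit apply to the current shell and its children, so a subshell is a safe place to experiment (lowering a soft limit needs no privileges):

```shell
# lower the open-files limit inside a subshell; the parent is unaffected
( ulimit -n 64; ulimit -n )   # prints 64
ulimit -n                      # the parent still shows its original limit
```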

The configuration file looks like (nproc and fsize are example items):

# domain    type    item    value
  <user>    hard    nproc   20
  @<group>  soft    fsize   1024

A soft limit is what is enforced right now; a user may raise it up to the hard limit, which cannot be exceeded.

Setup host security


If you create a file /etc/nologin (readable by everyone), the system will disable all logins except root. If you add text to the file it will be displayed when a user tries to log in.

Super server

A super server is a type of daemon that starts/stops other servers when needed. Normally a connection gets checked first by TCP wrappers. Inetd (Internet Services Daemon) is a well-known super server: it listens on the specific ports used by internet services and, when a request comes in on one of them, launches the appropriate service to handle the connection. Its main configuration file is /etc/inetd.conf. Nowadays inetd is replaced on most systems by xinetd (extended inetd), whose main configuration directory is /etc/xinetd.d. After editing xinetd's configuration you need to restart the service.

A configuration file in /etc/xinetd.d looks like:

service <name>
{
  disable = <bool>
  type = <string>
  protocol = <>
  user = <>
  wait = <>
  socket_type = <>
}

To enable a service change the value of disable to no. After that, anyone who logs in is able to use that service. To differentiate users based on IP use TCP-wrappers.


TCP wrappers use 2 files: /etc/hosts.allow and /etc/hosts.deny. hosts.allow has the upper hand in conflicts between the files. With these wrappers you can allow/deny networks access to services. An entry looks like:

<service>: <ip/network>

# real example, hosts.allow

ftpd: 192.168.1. # everyone in this network is allowed to use ftpd

Securing data with encryption


SSH is a cryptographic network protocol for using network services securely over an unsecured network. SSH works with key pairs: a private key and a public key. To send someone data confidentially you encrypt it with their public key; only the holder of the matching private key can decrypt it. The private key can also be used for signing: anyone with the public key can then verify that the data comes from the holder of that private key.

To generate keys run:

$ ssh-keygen -t rsa

This will output 2 keys in ~/.ssh/; a private key=id_rsa and a public key=id_rsa.pub.
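Key generation can also be done non-interactively (empty passphrase here for illustration only; use a passphrase in practice):

```shell
cd "$(mktemp -d)"
ssh-keygen -t rsa -N '' -f id_rsa -q   # -N '': empty passphrase, -q: quiet
ls id_rsa id_rsa.pub                    # the private and the public key
```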

SSH supports several key algorithms; RSA is the most commonly used, and DSA was historically used for signing.

Your private key must be kept secret: set its permissions to 600 (and the ~/.ssh directory to 700).

To send your public key to a server, so you don't need to enter your password every time:

$ ssh-copy-id -i <id_rsa.pub> <server>

Now it's best to disable password authentication and allow only public key authentication on the server, in the file /etc/ssh/sshd_config: change the PubkeyAuthentication and PasswordAuthentication lines.

It is possible to protect your key with a passphrase, which you would then have to type every time you log in. This can be avoided with ssh-agent: after you enter the passphrase once, the agent keeps the key in memory, and only its child processes can use it. Start a session with:

$ ssh-agent bash

To add the key:

$ ssh-add ~/.ssh/id_rsa

If you would like to tunnel X over ssh enable this in /etc/ssh/sshd_config:

X11Forwarding yes

And on the client in /etc/ssh/ssh_config:

ForwardX11 yes

To tunnel localhost to another IP use:

$ ssh <server> -N -L <localport>:<host>:<port>
$ ssh <user>@<server> -N -L 2000:<host>:<port>

localhost:2000 is then tunneled: it shows the same content as <host>:<port>, reached through the server.


Instead of doing network related encryption GnuPG (Gnu Privacy Guard) is used for file encryption. To generate keys:

$ gpg --gen-key

It is possible that your system does not have enough entropy/noise to generate keys. Do 'stuff' on your system (open files, browse, edit, ...) so more entropy can be gathered and the keys can be generated. To generate a lot of noise: find / -type f | xargs cat > /dev/null.

To see a list of keys:

$ gpg --list-keys

To export the key:

$ gpg --export <id> > <gpg.pub>

To import the key (on another server)

$ gpg --import <gpg.pub>

To encrypt a file with a certain public key:

$ gpg --output <encrypted_filename> --recipient <id> --encrypt <file>

To decrypt:

$ gpg --output <filename> --decrypt <file>


Init system cheatsheet

Task | SysVinit | Systemd | Upstart
Start/stop a service | service name start/stop | systemctl start/stop name | service name start/stop
Enable a service at startup | chkconfig name on | systemctl enable name | update-rc.d name enable
... for specific runlevels | chkconfig --level 2,3 name on | (add the unit to the target's wants) | update-rc.d name enable 2,3
Set default runlevel | edit /etc/inittab | systemctl set-default target | edit /etc/init/rc-sysinit.conf
Get default runlevel | cat /etc/inittab | systemctl get-default | cat /etc/init/rc-sysinit.conf
Set current runlevel | init rl / telinit rl | systemctl isolate target | init rl / telinit rl
List services | chkconfig --list / service --status-all | systemctl list-unit-files --type=service | initctl list / service --status-all
List services per runlevel | sysv-rc-conf | ls /etc/systemd/system/*.wants |
Reload service definitions | chkconfig name --add | systemctl daemon-reload |