Wednesday, April 30, 2008

Find out Linux Disk utilization

You can use iostat to find out disk utilization and for monitoring system input/output device loading by observing the time the physical disks are active in relation to their average transfer rates.

iostat -d -x interval count

  • -d : Display the device utilization report (d == disk)
  • -x : Display extended statistics including disk utilization
  • interval : The time period in seconds between two samples. iostat 2 will give data at 2 second intervals.
  • count : The number of times the data is needed. iostat 2 5 will give data at 2 second intervals, 5 times.

Display 3 reports of extended statistics at 5 second intervals for disk:
$iostat -d -x 5 3

Linux 2.6.18-53.1.4.el5 (moon.nixcraft.in)   12/17/2007

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 1.10 39.82 3.41 13.59 309.50 427.48 43.36 0.17 10.03 1.03 1.75
sdb 0.20 18.32 1.15 6.08 117.36 195.25 43.22 0.51 71.14 1.26 0.91

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 108.40 1.40 64.40 49.60 1382.40 21.76 0.04 0.67 0.44 2.92
sdb 0.00 37.80 0.00 245.20 0.00 2254.40 9.19 28.91 108.49 1.08 26.36

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 97.01 1.00 57.29 39.92 1234.33 21.86 0.03 0.58 0.50 2.89
sdb 0.00 38.32 0.00 288.42 0.00 2623.55 9.10 32.97 122.30 1.15 33.27

Where,

  • rrqm/s : The number of read requests merged per second that were queued to the hard disk
  • wrqm/s : The number of write requests merged per second that were queued to the hard disk
  • r/s : The number of read requests per second
  • w/s : The number of write requests per second
  • rsec/s : The number of sectors read from the hard disk per second
  • wsec/s : The number of sectors written to the hard disk per second
  • avgrq-sz : The average size (in sectors) of the requests that were issued to the device.
  • avgqu-sz : The average queue length of the requests that were issued to the device
  • await : The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
  • svctm : The average service time (in milliseconds) for I/O requests that were issued to the device
  • %util : Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%.

To interpret the output for optimization, first note down the following values from the iostat output:

  1. The average service time (svctm)
  2. Percentage of CPU time during which I/O requests were issued (%util)
  3. See if a hard disk reports consistently high reads/writes (r/s and w/s)

If any one of these is high, take one of the following actions:

  • Get a faster disk and controller for the file system (for example, move from SATA I to 15k SAS disks)
  • Tune software or application or kernel or file system for better disk utilization
  • Use RAID array to spread the file system

For example, from the above iostat report it appears that /dev/sdb is under load.
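The interpretation steps above can be sketched as a small script. This is a minimal sketch: the function name, the 20% threshold, and the sd*/hd* device-name pattern are my own assumptions; in practice you would pipe real `iostat -d -x` output into it.

```shell
#!/bin/bash
# Minimal sketch: flag devices whose %util (the last column of
# `iostat -d -x` output) exceeds a threshold. The sd*/hd* name
# pattern and the threshold are assumptions.
flag_busy_disks() {
  local threshold="$1"
  awk -v t="$threshold" '$1 ~ /^(sd|hd)/ { if ($NF + 0 > t) print $1 }'
}

# Sample rows taken from the report above; in practice:
#   iostat -d -x 5 3 | flag_busy_disks 20
sample='sda 0.00 108.40 1.40 64.40 49.60 1382.40 21.76 0.04 0.67 0.44 2.92
sdb 0.00 37.80 0.00 245.20 0.00 2254.40 9.19 28.91 108.49 1.08 26.36'

printf '%s\n' "$sample" | flag_busy_disks 20   # prints: sdb
```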

Configure Linux to track and log failed login attempt records

You can use the faillog command to display faillog records or to set login failure limits. Without arguments, it displays only the faillog records of users who have ever had a login failure.

PAM Settings: Under RHEL/CentOS 5.x, you need to modify /etc/pam.d/system-auth. You need to configure PAM module pam_tally.so. Otherwise faillog will never display failed login attempts.

PAM configuration to record failed login attempts: pam_tally.so maintains a count of attempted accesses, can reset the count on success, and can deny access if too many attempts fail.

# vi /etc/pam.d/system-auth # Append
auth required pam_tally.so no_magic_root
account required pam_tally.so deny=3 no_magic_root lock_time=180

Where,

  • deny=3 : Deny access if tally for this user exceeds 3 times.
  • lock_time=180 : Always deny access for 180 seconds after a failed attempt. There is also an unlock_time=n option, which allows access n seconds after a failed attempt. If this option is used, the user is locked out for the specified amount of time after exceeding the maximum allowed attempts; otherwise the account stays locked until the lock is removed by manual intervention of the system administrator.
  • magic_root : If the module is invoked by a user with uid=0 the counter is not incremented. The sys-admin should use this for user launched services, like su, otherwise this argument should be omitted.
  • no_magic_root : Avoid root account locking, if the module is invoked by a user with uid=0

Display all failed login attempts for a user
# faillog -u didi

Display faillog records for all users
# faillog -a

-r can reset the counters of login failures or one record if used with the -u USERNAME option:

# faillog -r
# faillog -r -u vivek <-- only reset counter for vivek user

On a large Linux login server, it might be useful to clear all counts every midnight or every week from a cron job:
# crontab -e
@weekly /usr/bin/faillog -r

Clonezilla Ghost - The Linux answer to Norton Ghost

Norton Ghost is slow; only Norton Ghost Corporate Edition offers multicasting and a fast cloning system.

Clonezilla is free, open source, and supports multicasting. With DRBL (Diskless Remote Boot in Linux, which provides a diskless or systemless environment for client machines) and network-boot-enabled client computers, the only thing you have to prepare is a Clonezilla server. The best thing is ~ you do not even have to prepare a bootable CD or floppy with Partition Image for every client computer.

Download Clonezilla
A simple howto on setting up Clonezilla/DRBL on Ubuntu Linux
A simple documentation to quickly setup up Clonezilla on a PXE server

Copy hard disk or partition image to another system using a network and netcat

netcat is a swiss army knife in networking. It is designed to be a reliable "back-end" tool that can be used directly or easily driven by other programs/scripts, as well as a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.

One handy and trusted use is migrating data between two servers' hard drives. You can also use ssh for the same purpose, but encryption adds its own overhead.

A sample setup:

-----------------------
HostA // 192.168.1.1
------------------------
sda
NETWORK
sdb
------------------------
HostB // 192.168.1.2
-------------------------

To copy /dev/sda on HostA to /dev/sdb on HostB, first login as root.

On hostB (receiving end ~ write image mode), open port:
# netcat -p 2222 -l |bzip2 -d | dd of=/dev/sdb

Where,

  • -p 2222 : the source port, subject to privilege restrictions and availability. Make sure port 2222 is not used by another process.
  • -l : listen for an incoming connection rather than initiate a connection to a remote host.
  • bzip2 -d : Decompresses the incoming image (-d : force decompression mode). Compression on the sending side (bzip2 -c, using the Burrows-Wheeler block sorting compression algorithm and Huffman coding) speeds up the network transfer.
  • dd of=/dev/sdb : The target hard disk. You can also specify a partition such as /dev/sdb1

On hostA (send data over a network ~ read image mode), login as root:

# bzip2 -c /dev/sda | netcat hostB 2222 # OR use IP:
# bzip2 -c /dev/sda | netcat 192.168.1.2 2222

Naturally, this process takes its own time.

A note about netcat 1.84-10 and above: the above syntax will generate an error, as it is now an error to use -l in conjunction with -p, -s, or -z (and any timeouts specified with -w are ignored). So use nc as follows:

On hostB (receiving end):
# nc -l 2222 > /dev/sdb
On hostA (sending end):
# nc hostB 2222 < /dev/sda # Or
# nc 192.168.1.2 2222 < /dev/sda

From hostA, connect to the listening nc process on hostB (port 2222), feeding it the device (/dev/sda) which is to be transferred.
With compression added, on hostB:
# nc -l 2222 | bzip2 -d > /dev/sdb
On hostA:
# bzip2 -c /dev/sda | nc 192.168.1.2 2222
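The same pipeline can be tried safely on a regular file first. This sketch replaces the network hop (netcat) with a plain local pipe, and a small scratch file stands in for the raw disk device; the file names are temporary and arbitrary.

```shell
#!/bin/bash
# Sketch of the same pipeline run locally: the network hop (netcat)
# is replaced by a plain pipe, and a 64 KB scratch file stands in
# for the raw disk device, so this is safe to try anywhere.
set -e
src=$(mktemp) dst=$(mktemp)
head -c 65536 /dev/urandom > "$src"

# sender side: read + compress | receiver side: decompress + write
bzip2 -c "$src" | bzip2 -d | dd of="$dst" status=none

cmp -s "$src" "$dst" && echo "images match"   # prints: images match
```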

Restrict Users to SCP and SFTP and Block SSH Shell Access with rssh

FTP is an insecure protocol but often needed. OpenSSH server can transfer files using SCP and SFTP (secure FTP), but it also grants ssh shell access, which is not always desirable. Here is how sftp works:
SCP/SFTP -> sshd -> Call sftpd subsystem -> Requires a shell -> User login

rssh is a restricted shell for providing limited access to a host via ssh. It allows the following operations only:

  • scp - Secure file copy
  • sftp - Secure FTP
  • cvs - Easily retrieve old versions to see exactly which change caused the bug
  • rsync - Backup and sync file system
  • rdist - Backup / RDist program maintains identical copies of files on multiple hosts.
  • Default configuration file is located at /etc/rssh.conf (FreeBSD - /usr/local/etc/rssh.conf)
  • Default rssh binary location /usr/bin/rssh (FreeBSD - /usr/local/bin/rssh)
  • Default port: none (rssh uses openssh's port 22 - rssh is a shell with security features)
All you have to do is to set a user's shell to /usr/bin/rssh. For example:

Login as the root user, create a new user
# useradd -m -d /home/didi -s /usr/bin/rssh didi

Change user's shell:
# usermod -s /usr/bin/rssh didi # or
# chsh -s /usr/bin/rssh didi

Now try login via ssh or sftp using username didi:
$ sftp didi@my.backup.server.com # or
$ ssh didi@my.backup.server.com

This account is restricted by rssh.
This user is locked out.
Connection to my.backup.server.com closed.

By default rssh locks down everything, denying all access. Grant access to sftp and scp for all users by appending:
# vi /etc/rssh.conf
allowscp
allowsftp

rssh reads its configuration on the fly (there is no rssh daemon). Now the user can run scp and sftp, without shell access:
$ scp /path/to/file didi@my.backup.server.com:/. # or
$ sftp didi@my.backup.server.com:/.

Connecting to my.backup.server.com...
didi@my.backup.server.com's password:
sftp> pwd
Remote working directory: /home/didi
sftp>

Understanding command configuration options

  • allowscp : Tells the shell that scp is allowed.
  • allowsftp : Tells the shell that sftp is allowed.
  • allowcvs : Tells the shell that cvs is allowed.
  • allowrdist : Tells the shell that rdist is allowed.
  • allowrsync : Tells the shell that rsync is allowed.

Tip: Create a group for rssh users, and limit executable access to the binaries to users in that group to improve security. Use standard file permissions carefully and appropriately.

Chroot jail to lock users to their home only

rssh's chrootpath option is used to set the root of the chroot jail. A chroot affects only the current process and its children. If the default home directory is /home/didi, a normal user can still access files in /etc, /sbin or /bin. This could allow an attacker to install sneaky things in, say, /tmp via your web server.

=> Chroot directory: /users.
Tip: If possible mount /users filesystem with the noexec/nosuid option to improve security.

=> Required directories in jail:

  • /users/dev - Device file
  • /users/etc - Configuration file such as passwd
  • /users/lib - Shared libs
  • /users/usr - rssh and other binaries
  • /users/bin - Copy default shell such as /bin/csh or /bin/bash

=> Required files in jail at /users (default for RHEL / CentOS / Debian Linux):

  • /etc/ld.so.cache
  • /etc/ld.so.cache.d/*
  • /etc/ld.so.conf
  • /etc/nsswitch.conf
  • /etc/passwd
  • /etc/group
  • /etc/hosts
  • /etc/resolv.conf
  • /usr/bin/scp
  • /usr/bin/rssh
  • /usr/bin/sftp
  • /usr/libexec/openssh/sftp-server OR /usr/lib/openssh/sftp-server
  • /usr/libexec/rssh_chroot_helper OR /usr/lib/rssh/rssh_chroot_helper (suid must be set on this binary)
  • /bin/sh or /bin/bash (default shell)

Tip: Limit the binaries that live in the jail to the absolute minimum required, to improve security. Usually /bin/bash and /bin/sh are not required, but some systems may give errors without them.

The files placed in the jail directory must mimic their placement in the root file system, so you need to copy all required files. For example, copy /usr/bin/rssh to /users/usr/bin/rssh.
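Mirroring files into the jail can be sketched with a small hypothetical helper (the function name and the parametrised jail path are my own; the steps below copy each file by hand):

```shell
#!/bin/bash
# Hypothetical helper (not part of rssh): mirror a file into the
# jail at the same relative path, e.g.
#   copy_into_jail /users /usr/bin/rssh   ->  /users/usr/bin/rssh
copy_into_jail() {
  local jail="$1" file="$2"
  mkdir -p "$jail$(dirname "$file")"   # recreate the parent path in the jail
  cp "$file" "$jail$file"
}
```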

Building the Chrooted Jail

# mkdir -p /users/{dev,etc,lib,usr,bin}
# mkdir -p /users/usr/bin
# mkdir -p /users/usr/libexec/openssh

# mknod -m 666 /users/dev/null c 1 3

# cd /users/etc
# cp /etc/ld.so.cache .
# cp -avr /etc/ld.so.cache.d/ .
# cp /etc/ld.so.conf .
# cp /etc/nsswitch.conf .
# cp /etc/passwd .
# cp /etc/group .
# cp /etc/hosts .
# cp /etc/resolv.conf .

Open the /users/etc/group and /users/etc/passwd files and remove root and all other accounts.

# cd /users/usr/bin
# cp /usr/bin/scp .
# cp /usr/bin/rssh .
# cp /usr/bin/sftp .
# cd /users/usr/libexec/openssh/
# cp /usr/libexec/openssh/sftp-server . # Or


# cp /usr/lib/openssh/sftp-server .
# cd /users/usr/libexec/
# cp /usr/libexec/rssh_chroot_helper . # Or

# cp /usr/lib/rssh/rssh_chroot_helper .
# cd /users/bin/
# cp /bin/sh . # Or

# cp /bin/bash .

The library files that these binaries need can be found using ldd / strace. For example:
ldd /usr/bin/sftp

Copy all these libraries to /users/lib and the other appropriate locations. Or use the l2chroot script:
# cd /sbin
# wget -O l2chroot http://www.cyberciti.biz/files/lighttpd/l2chroot.txt
# chmod +x l2chroot


Open l2chroot and set BASE variable to point to chroot directory (jail) location:
BASE="/users"

# l2chroot /usr/bin/scp
# l2chroot /usr/bin/rssh
# l2chroot /usr/bin/sftp
# l2chroot /usr/libexec/openssh/sftp-server # or


# l2chroot /usr/lib/openssh/sftp-server
# l2chroot /usr/libexec/rssh_chroot_helper # or

# l2chroot /usr/lib/rssh/rssh_chroot_helper
# l2chroot /bin/sh # or


# l2chroot /bin/bash

The syslog library function works by writing messages to a Unix domain socket such as /dev/log. You need to pass the -a /path/to/chroot/dev/log option to syslogd. This argument specifies additional sockets for syslogd to listen on, which is needed when daemons run within a chroot() environment. Up to 19 additional sockets can be used; to allow more, increase the MAXFUNIX symbol in the syslogd.c source file. Edit the syslog configuration:
# vi /etc/sysconfig/syslog
SYSLOGD_OPTIONS="-m 0"
Append -a /users/dev/log
SYSLOGD_OPTIONS="-m 0 -a /users/dev/log"

# /etc/init.d/syslog restart
(For Debian/Ubuntu, apply changes to /etc/default/syslogd)

Set chrootpath to /users
# vi /etc/rssh.conf
chrootpath=/users
# /etc/init.d/sshd start

Add user to the jail:
# useradd -m -d /users/didi -s /usr/bin/rssh didi

sftp> pwd
Remote working directory: /didi
sftp> cd /tmp
Couldn't canonicalise: No such file or directory

Per User Configuration Options for chroot Jail

From the man page: The user configuration directive allows for the configuration of options on a per-user basis. THIS KEYWORD OVERRIDES ALL OTHER KEYWORDS FOR THE SPECIFIED USER. That is, if you use a user keyword for user foo, then foo will use only the settings in that user line, and not any of the settings set with the keywords above. The user keyword’s argument consists of a group of fields separated by a colon (:), as shown below. The fields are, in order:

  • username : The username of the user for whom the entry provides options
  • umask : The umask for this user, in octal, just as it would be specified to the shell
  • access bits : Five binary digits, which indicate whether the user is allowed to use rsync, rdist, cvs, sftp, and scp, in that order. One means the command is allowed, zero means it is not.
  • path : The directory to which this user should be chrooted (this is not a command, it is a directory name).

rssh examples of configuring per-user options:

# vi /etc/rssh.conf
user=tom:077:00010                # Allow user tom sftp access, bypassing our chroot jail
user=jerry:011:00100              # Provide jerry cvs access with no chroot
user=spike:011:10000              # Provide spike rsync access with no chroot
user="tyke:011:00001:/users"      # Provide tyke scp access with chroot jail located at /users
user=nibbles:011:00001:"/usr/local/tv/shows/tom and jerry" # Provide nibbles scp access with a chroot directory; the space must be quoted
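As a sanity check, the five access bits can be decoded with a tiny sketch (the function name is my own; the bit order follows the man page excerpt above: rsync, rdist, cvs, sftp, scp):

```shell
#!/bin/bash
# Sketch: decode the five rssh access bits, in man-page order:
# rsync, rdist, cvs, sftp, scp.
decode_bits() {
  local bits="$1" out=()
  local names=(rsync rdist cvs sftp scp)
  local i
  for i in 0 1 2 3 4; do
    [ "${bits:$i:1}" = "1" ] && out+=("${names[$i]}")
  done
  echo "${out[*]}"
}

decode_bits 00010   # prints: sftp
```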

Sunday, April 27, 2008

Marry mutt and gmail

I found one of my colleagues showing off a console tool to read/send emails, its name is mutt and it looks wicked cool, so I want it!

I'll pick Gmail as mutt's husband here, as it's probably the most popular email service nowadays. After a few hours' setup marathon, I finally got mutt pulling mail from Gmail on my laptop :)... well, still have some work to do on filters, though.

First things first, get everything set up. It can be lengthy, but worth my time.
http://www.andrews-corner.org/mutt.html

To do it real quick,

1. Get mutt (obviously), procmail, fetchmail
2. Get Gmail's SSL certificate
$ openssl s_client -connect pop.gmail.com:995 -showcerts
Paste the "BEGIN...END" block into ~/.certs/gmail.pem

3. This gets a little tricky, as the article doesn't elaborate clearly.
If you are using Debian/Ubuntu, follow the instructions there. If you're using other distros like me, here is the way to get it straight:
$ locate mutt|grep bundle
/usr/share/doc/mutt-1.5.17/ca-bundle.crt

Copy the first instance of "BEGIN...END" (there should be many) and paste it into: ~/.certs/equifax.pem

Weird name... I don't know what it means, just do it.

4. I will skip some long boring stuff; go directly to http://www.thawte.com/roots/
where you'll obtain a zip file, which includes the third and last certificate.
Unzip the file, go to the Thawte Server Roots directory, and issue:

cp ThawtePremiumServerCA_b64.txt $HOME/.certs/ThawtePremiumServerCA.txt


5. Now all certificates are set. You'll need a tool called c_rehash, a perl script that comes with the openssl package.

$c_rehash $HOME/.certs/
$openssl s_client -connect pop.gmail.com:995 -CApath $HOME/.certs/

Unfortunately, my distro (Fedora 8) isn't generous enough to provide this tool, and I couldn't find one by googling... Out of nowhere, I found a guy who had the same problem and translated it into an sh script... and it actually worked. Check it out here:

http://article.gmane.org/gmane.os.netbsd.devel.crypto/182

Paste the script code into ~/.certs/c_rehash.sh, then:

$ chmod 744 c_rehash.sh
$ ./c_rehash.sh .
...

6. The hardest part ends here; the rest of the work is all about setting up the config files. Just follow the article.

7. Voilà, mutt is online :)...

Oh wait, I found that my mutt, although working beautifully, couldn't handle HTML mails... Fine, more googling... OK, to get this done, I found this URL has very straightforward steps:

http://www.debian-administration.org/articles/75

Of course, this just gets mutt started. mutt is an extremely daunting tool for Linux/Unix newbies, as well as a great challenge to Unix vets.

However, the effort is worth it; remember the motto of mutt:
"All email clients suck, mutt sucks less" :)


Send an email with attachment:
$ mutt -s "Test mail" -a /tmp/attachment.tar.gz didi@toilet.com < /tmp/message.txt


To support more than 4 GB memory

By default, a 32-bit kernel's scope is capped at 4GB of memory; it won't see more than this amount even if the memory is physically there.

To extend the kernel's sight, the kernel-PAE package is needed, which provides support for up to 64GB of high memory. A CPU with Physical Address Extensions (PAE) is required; most modern CPUs support it, so no big deal.

# yum install kernel-PAE
...

# init 6 (of course a reboot is needed to load the new kernel)

Booting from SAN, bad idea?

SAN/iSCSI/NAS, or any other centralized network storage, works great for shared data or HA configurations. Although technically it can be booted from, this is not recommended unless you need diskless nodes.

Moreover, booting from SAN volumes is complicated, as most OSes are not designed for this kind of configuration. A huge scripting and boot-procedure effort is required.

The SAN vendor must support booting a Linux server on your platform. You need to configure the HBA and SAN according to the vendor specification, and you depend totally on the SAN vendor for drivers and firmware (HBA BIOS) to get things working properly. General principle - don't put all your eggs in one basket, err, one vendor.

Proper fibre channel topology must be used. Make sure multipathing and redundant SAN links are used, the boot disk LUN is dedicated to a single host, etc.

Check if a shell script is being run by root

BASH stores a user's ID in the $UID variable. Your effective user ID is stored in the $EUID variable.

#Old way: just add a simple check at the start of the script:
#!/bin/bash
# Init
FILE="/tmp/out.$$"
GREP="/bin/grep"
#....
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi

#A new way using EUID
#!/bin/bash
# Init
FILE="/tmp/out.$$"
GREP="/bin/grep"
#....
# Make sure only root can run our script
if [[ $EUID -ne 0 ]]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi
# ...

#Only root can mount /dev/sdb1
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
   echo "You must be a root user" 1>&2
   exit 1
else
   mount /dev/sdb1 /mnt/disk2
fi

Shell completion dig-in

Fine tuning shell completion with the ESC key

Bash allows you to fine tune file completion using ESC key combinations. People are amazed when I use ESC combinations in front of them. For example, to insert all possible completions into your command, use ESC+*. Let us see how to back up all /etc/*.conf files; type the command:
tar -zcvf /dev/rt0 /etc/*.conf {hit ESC followed by *}
As soon as you hit ESC+*, the shell replaces the /etc/*.conf part with the names of all files matching the wildcard pattern:
tar -zcvf /dev/rt0 /etc/aatv.conf /etc/adduser.conf /etc/apg.conf /etc/brltty.conf /etc/ca-certificates.conf /etc/cvs-cron.conf /etc/cvs-pserver.conf /etc/debconf.conf ....

To display all possible completions of a command, filename or username, type ESC+?. To display all usernames starting with le, type:
cat ~le {hit ESC followed by ?}

complete command

There is also a built-in command called complete. It is used to specify how arguments are to be completed for a command. For example, when you type passwd (or any other user admin command such as su / usermod) and hit the tab key, bash will show a list of all available users, i.e. all user admin related commands will complete only user names:
complete -u su usermod userdel passwd chage write chfn groups slay w
Now type passwd and hit the tab key to see all usernames:

# passwd {hit tab key}
Output:

avahi          bin      dhcp     gdm    haldaemon  klog  mail        news    root  sys      uucp
avahi-autoipd  cupsys   dnsmasq  gnats  hplip      list  man         nobody  sshd  syslog   vivek
backup         daemon   games    guest  irc        lp    messagebus  proxy   sync  telnetd  www-data


To make life even easier, source the file in the startup script:
$ source /etc/bash_completion


Reboot after kernel panic

By default, the kernel will not reboot after a panic. To change this:

# vi /etc/sysctl.conf

kernel.panic = 10 # reboot after 10 seconds
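A quick way to check the setting without rebooting: the live value is exposed under /proc (after running `sysctl -p` as root to reload /etc/sysctl.conf).

```shell
#!/bin/bash
# Verify the setting: after `sysctl -p` (run as root, reloads
# /etc/sysctl.conf), the live value is visible under /proc:
cat /proc/sys/kernel/panic   # 0 means "never reboot after a panic"
```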

How to format and connect to an iSCSI Volume

Install the open-iscsi package for a high performance, transport independent iSCSI implementation. This package is also known as the Linux Open-iSCSI Initiator.

Kernel version 2.6.16 or later is required.


Install Open-iSCSI Initiator

$ sudo apt-get install open-iscsi

Open-iSCSI default configuration

Default configuration file: /etc/iscsi/iscsid.conf or ~/.iscsid.conf.

# vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.username = MY-ISCSI-USER
node.session.auth.password = MY-ISCSI-PASSWORD
discovery.sendtargets.auth.username = MY-ISCSI-USER
discovery.sendtargets.auth.password = MY-ISCSI-PASSWORD
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536

# /etc/init.d/open-iscsi restart


Now run a discovery against the iscsi target host:
# iscsiadm -m discovery -t sendtargets -p ISCSI-SERVER-IP-ADDRESS

For example:
# iscsiadm -m discovery -t sendtargets -p 192.168.1.60

# /etc/init.d/open-iscsi restart

Format iSCSI Volume

Now an additional drive should appear on the system, such as /dev/sdc. To find out device name:

# tail -f /var/log/messages

Create a partition:
# fdisk /dev/sdc

Format partition:
# mkfs.ext3 /dev/sdc1

Mount file system:
# mkdir /iscsi
# mount /dev/sdc1 /iscsi



Force DHCP client (dhclient) to renew ip address

$ sudo dhclient -r # release current ip
$ sudo dhclient # acquire new ip

Restore a backup of a MySQL Database Server

# read the dump file back into the server

mysql db-name < db-backup-file.sql

For example, to restore a database called sales:

# First create the database sales:
$ mysql -u root -p

mysql> CREATE DATABASE sales;
mysql> quit;

# now restore database:
$ mysql -u root -p sales < /path/to/sales-backup-file.sql

Find the inode size of the filesystem

# tune2fs -l /dev/sda1 | grep Inode
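The grep above prints the whole matching lines; here is a sketch that pulls out just the number. The sample line is hard-coded, and on a real system you would pipe `tune2fs -l /dev/sda1` into the same filter.

```shell
#!/bin/bash
# Sketch: extract just the inode size number from tune2fs-style
# output (the sample line below is hard-coded for illustration).
inode_size() {
  awk -F: '/Inode size/ { gsub(/ /, "", $2); print $2 }'
}

echo "Inode size:           256" | inode_size   # prints: 256
```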

Install Xcache for PHP5

XCache is an open-source opcode cacher. It accelerates/optimizes the performance of PHP on servers by removing the compilation time of PHP scripts: it caches the compiled state of PHP scripts in shared memory (RAM) and serves the compiled version straight from RAM. This can increase the rate of page generation by up to 5 times, as it also optimizes many other aspects of PHP scripts and reduces server load.

# Download latest stable release:
# wget http://xcache.lighttpd.net/pub/Releases/1.2.1/xcache-1.2.1.tar.gz

# Compile and install. Use phpize to prepare xcache as a PHP extension for compiling:
# phpize
# ./configure --enable-xcache
# make
# make install

# Default xcache.so install location

  • 64 bit PHP module installed at /usr/lib64/php/modules/xcache.so
  • 32 bit PHP module installed at /usr/lib/php/modules/xcache.so

# Create xcache.ini

Under RHEL / CentOS, PHP module configuration files are stored in /etc/php.d/:
# cd /etc/php.d/
# vi xcache.ini # Append configuration directives:

[xcache-common]
; change me - 64 bit php => /usr/lib64/php/modules/xcache.so
; 32 bit php => /usr/lib/php/modules/xcache.so
zend_extension = /usr/lib64/php/modules/xcache.so

[xcache.admin]
xcache.admin.auth = On
xcache.admin.user = "mOo"
; xcache.admin.pass = md5($your_password)
xcache.admin.pass = ""

[xcache]
xcache.shm_scheme = "mmap"
xcache.size = 32M
xcache.count = 1
xcache.slots = 8K
xcache.ttl = 3600
xcache.gc_interval = 300

; Same as above but for variable cache
; If you don't know for sure that you need this, you probably don't
xcache.var_size = 0M
xcache.var_count = 1
xcache.var_slots = 8K
xcache.var_ttl = 0
xcache.var_maxttl = 0
xcache.var_gc_interval = 300

; N/A for /dev/zero
xcache.readonly_protection = Off

xcache.mmap_path = "/dev/zero"

xcache.cacher = On
xcache.stat = On

Alternatively, you can also copy default xcache.ini to /etc/php.d/
# cp xcache.ini /etc/php.d/
# vi /etc/php.d/xcache.ini

# service httpd restart
# service lighttpd restart

# Make sure xcache is working

$ php -v

You should see the line "XCache v1.2.1, Copyright (c) 2005-2007, by mOo"

Tunnel X Windows through SSH

SSH is frequently used to tunnel insecure traffic over the Internet, and it's easy to use:

$ ssh -X user@server.corp.com
$ ssh -X user@202.54.1.2

You can request compression of all data to improve the user experience (good for a low speed link such as a WAN link) using -C (note the capital; lowercase -c selects the cipher):
$ ssh -C -X user@202.54.1.2

Once logged in you can fire up any X windows program:
$ xeyes &
$ oowriter &

To start kde type:
$ startkde &

To start default desktop type:
$ startx

Using httping to Measure the Latency and Throughput of Webservers

# For measuring the latency. Press CTRL+c to exit the program.

$ httping -g URL [-p port]

You can also connect using SSL; for this to work you need to give an https URL or port number 443:
$ httping -l -g https://www.cyberciti.biz # Or
$ httping -g http://www.cyberciti.biz -p 443

# Measure throughput of a webserver

-G forces a GET request instead of a HEAD request - this means that the complete page/file must be transferred as well. You also need to pass -b along with -G to get the transfer speed (in KB/s).
$ httping -Gbg http://www.cyberciti.biz/

...
--- http://www.cyberciti.biz/ ping statistics ---
53 connects, 53 ok, 0.00% failed
round-trip min/avg/max = 1451.9/2013.6/11656.0 ms
Transfer speed: min/avg/max = 6/19/24 KB

Please note the above command does not measure the latency!

You can also pass -X with -G to show the amount of data transferred (excluding the headers):
$ httping -XGbg http://www.cyberciti.biz/

...
--- http://www.cyberciti.biz/ ping statistics ---

7 connects, 7 ok, 0.00% failed
round-trip min/avg/max = 1507.7/1693.4/2620.3 ms
Transfer speed: min/avg/max = 9/21/24 KB

# Test remote server CPU

-B along with -G asks the webserver to compress the returned data - this reduces the influence of the bandwidth of your connection while increasing the influence of the processor power of the HTTP server.
$ httping -BGg http://www.cyberciti.biz/


# Flood server

-f flood ping, i.e. do not sit idle between each ping but ping as fast as the computer and network allow (don't run this against production or 3rd party servers):
$ httping -fg http://www.cyberciti.biz/

Check Whether a Directory is Empty or Not

1. Print file names from /tmp. If there is no output, the directory is empty.

$ find "/tmp" -type f -exec echo Found file {} \;

Found file /tmp/_.c
Found file /tmp/orbit-vivek/bonobo-activation-server-ior
...

2. The simplest and most effective way is to use ls with -A:

$ [ "$(ls -A /directory)" ] && echo "Not Empty" || echo "Empty" # Or

#!/bin/bash
# init
FILE=""
DIR="/tmp"
# look for empty dir
if [ "$(ls -A $DIR)" ]; then
   echo "Take action: $DIR is not empty"
else
   echo "$DIR is empty"
fi
# rest of the logic
# rest of the logic
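The same `ls -A` trick generalises to a reusable function (a sketch; the function name is my own, and the directory is a parameter instead of being hard-coded):

```shell
#!/bin/bash
# Sketch: reusable emptiness check built on `ls -A`; succeeds (exit 0)
# when the directory has no entries at all, including dotfiles.
is_empty() {
  [ -z "$(ls -A "$1")" ]
}

d=$(mktemp -d)
is_empty "$d" && echo "empty"        # prints: empty
touch "$d/somefile"
is_empty "$d" || echo "not empty"    # prints: not empty
```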

Detect Duplicate IP Address With arping command

A practical issue, especially on large networks. arping sends an ARP REQUEST to a neighbour host/computer.

$ sudo arping -I eth0 -c 3 192.168.1.1

ARPING 192.168.1.1 from 192.168.1.106 eth0
Unicast reply from 192.168.1.1 [00:18:39:6A:C6:8B] 2.232ms
Unicast reply from 192.168.1.1 [00:18:39:6A:C6:8B] 1.952ms
Sent 3 probes (1 broadcast(s))
Received 3 response(s)
  • -I eth0 : Specify network interface i.e. name of network device where to send ARP REQUEST packets. This option is required.
  • -c 3 : Stop after sending 3 ARP REQUEST packets

-D specifies duplicate address detection (DAD) mode. It returns exit status 0 if DAD succeeded, i.e. no replies were received. Always use the following syntax for duplicate address detection with arping:
$ sudo arping -D -I <interface-name> -c 2 <IP-ADDRESS-TO-TEST>
$ echo $?
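A minimal sketch of a pre-flight check built on that exit status; the interface name eth0 and candidate address 192.168.1.50 below are example values, substitute your own:

```shell
#!/bin/bash
# Sketch: run arping's DAD mode before assigning a candidate address.
# IFACE and CANDIDATE are example values for illustration.
IFACE="eth0"
CANDIDATE="192.168.1.50"
if sudo arping -D -I "$IFACE" -c 2 "$CANDIDATE" > /dev/null; then
  echo "$CANDIDATE appears free; safe to assign"    # exit 0: no replies seen
else
  echo "$CANDIDATE is already in use"               # non-zero: duplicate found
fi
```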

Understanding Bash fork() bomb ~ :(){ :|:& };:

This is horrible code for any Unix / Linux box. It is often used by sysadmins to test user process limits (Linux process limits can be configured via /etc/security/limits.conf and PAM).

Once a successful fork bomb has been activated in a system it may not be possible to resume normal operation without rebooting, as the only solution to a fork bomb is to destroy all instances of it.

Here is more human readable code:

bomb() {
  bomb | bomb &
}; bomb

A properly configured Linux / UNIX box should not go down when a fork() bomb goes off; this can be prevented by limiting user processes, which is important for running a stable system. Just add a user name, a group, or all users to the /etc/security/limits.conf file and impose process limits.

# This will prevent a "fork bomb":
vivek hard nproc 300
@student hard nproc 50
@faculty soft nproc 100
@pusers hard nproc 200

The above prevents anyone in the student group from having more than 50 processes; the faculty and pusers groups are limited to 100 and 200 processes respectively, and the user vivek can create at most 300 processes. Please note that KDE and GNOME can launch many processes.
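After editing limits.conf, a user can verify the limit in effect at the next login with the bash ulimit builtin; -S and -H show the soft and hard limits respectively:

```shell
# Show the current per-user process limits (bash builtin).
ulimit -S -u   # soft limit on max user processes
ulimit -H -u   # hard limit on max user processes
```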

Calculating Subnets with ipcalc and sipcalc Utilities

Supported features of these utilities include:

=> Multiple address and netmask input formats.
=> Retrieving of address information from interfaces.
=> Classful and CIDR output.
=> Multiple address and netmask output formats (dotted quad, hex, number of bits).
=> Output of broadcast address, network class, Cisco wildcard, hosts/range, network range.
=> Output of multiple types of bitmaps.
=> Output of a user-defined number of extra networks.
=> Multiple networks input from commandline.
=> DNS resolutions of hostnames.
=> Compressed and expanded input addresses.
=> Compressed and expanded output.
=> Standard IPv6 network output.
=> v4 in v6 output.
=> Reverse dns address generation.
=> The ability to "split" a network based on a smaller netmask, also with recursive runs on the generated subnets.

[a] ipcalc takes an IP and netmask and calculates the resulting broadcast, network, Cisco wildcard mask, and host range. By giving a second netmask, you can design sub- and supernetworks. It is also intended to be a teaching tool and presents the results as easy to understand binary values.

[b] sipcalc is also an IP subnet calculator consisting of two parts: a plain text based console version and a web (CGI) based counterpart. Sipcalc, in its simplest form, takes an IP and a subnet mask on the command line and outputs information about the subnet. It supports both IPv4 and IPv6 addresses.

$ ipcalc 192.168.0.1/24

Address:   192.168.0.1          11000000.10101000.00000000. 00000001
Netmask:   255.255.255.0 = 24   11111111.11111111.11111111. 00000000
Wildcard:  0.0.0.255            00000000.00000000.00000000. 11111111
=>
Network:   192.168.0.0/24       11000000.10101000.00000000. 00000000
HostMin:   192.168.0.1          11000000.10101000.00000000. 00000001
HostMax:   192.168.0.254        11000000.10101000.00000000. 11111110
Broadcast: 192.168.0.255        11000000.10101000.00000000. 11111111
Hosts/Net: 254                  Class C, Private Internet

$ sipcalc 192.168.0.1/24

-[ipv4 : 192.168.0.1/24] - 0

[CIDR]
Host address - 192.168.0.1
Host address (decimal) - 3232235521
Host address (hex) - C0A80001
Network address - 192.168.0.0
Network mask - 255.255.255.0
Network mask (bits) - 24
Network mask (hex) - FFFFFF00
Broadcast address - 192.168.0.255
Cisco wildcard - 0.0.0.255
Addresses in network - 256
Network range - 192.168.0.0 - 192.168.0.255
Usable range - 192.168.0.1 - 192.168.0.254

$ sipcalc 192.168.0.1 255.255.128.0 255.255.192.0

-[ipv4 : 192.168.0.1 255.255.128.0] - 0

[CIDR]
Host address - 192.168.0.1
Host address (decimal) - 3232235521
Host address (hex) - C0A80001
Network address - 192.168.0.0
Network mask - 255.255.128.0
Network mask (bits) - 17
Network mask (hex) - FFFF8000
Broadcast address - 192.168.127.255
Cisco wildcard - 0.0.127.255
Addresses in network - 32768
Network range - 192.168.0.0 - 192.168.127.255
Usable range - 192.168.0.1 - 192.168.127.254

-
-[ipv4 : 255.255.192.0] - 0

[CIDR]
Host address - 255.255.192.0
Host address (decimal) - 4294950912
Host address (hex) - FFFFC000
Network address - 255.255.192.0
Network mask - 255.255.255.255
Network mask (bits) - 32
Network mask (hex) - FFFFFFFF
Broadcast address - 255.255.192.0
Cisco wildcard - 0.0.0.0
Addresses in network - 1
Network range - 255.255.192.0 - 255.255.192.0

# Display results as HTML

$ ipcalc -h 192.168.0.1/255.255.128.0

$ man ipcalc
$ man sipcalc

Find the memory used by a process using pmap command


By looking into /proc or using ps or top, you have to calculate the total memory usage yourself, i.e. add up shared memory, mapped files, the total virtual memory size of the process, the resident set size, and the non-swapped physical memory used by the process.

pmap reports the memory map of a process or processes. To display process mappings:
$ pmap pid

-x can be used to provide information about the memory allocation and mapping types per mapping, including the amount of resident, non-shared anonymous, and locked memory.
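For example, a quick sketch that pulls the RSS total for the current shell from pmap's summary line; the awk field number is an assumption for procps-ng output, where the last line reads "total kB <Kbytes> <RSS> <Dirty>", so check your version's column layout:

```shell
# Extended mapping report for the current shell ($$ is its PID),
# then grab the RSS column from the "total" summary line.
pmap -x $$ | tail -1 | awk '{print "Total RSS (KB):", $4}'
```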

Testing Connectivity with fping

You can read a list of targets (hosts / servers) from a file. The -f option can only be used by the root user; regular users should pipe in the file via stdin redirection. For example:
$ fping < ~/.ping.conf

You can also use a netmask to ping an entire network, i.e. generate a target list from a supplied IP/netmask. For example, ping the class C network 192.168.1.x:
$ fping -g 192.168.1.0/24 # or
$ fping -g 192.168.1.0 192.168.1.255

#!/bin/bash
# Send email if host is down

HOSTS="router sun printer laptop sony-laptop xbox backup-server"
for h in $HOSTS
do
  fping -u "$h" >& /dev/null
  if [ $? -ne 0 ]; then
    echo "${h} host is down, sending email"
    # mail -s "Host ${h} down" admin@you.com < /dev/null
  fi
done

# Perform an action only on hosts that are currently reachable.

#!/usr/bin/perl
$myHosts = `cat /etc/hosts.backup | fping -a`;
foreach $host (split(/\n/,$myHosts)) {
  # take action or call other function
}
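The same idea works in plain bash, sketched below under the assumption that /etc/hosts.backup holds one hostname or IP per line: fping -a prints only the targets that answered, and the loop acts on each of them.

```shell
#!/bin/bash
# Act only on reachable hosts (assumes /etc/hosts.backup exists,
# one hostname or IP per line).
while read -r host; do
  echo "reachable: $host"   # replace with the real action
done < <(fping -a < /etc/hosts.backup 2>/dev/null)
```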