Wednesday, May 7, 2008

FreeNAS: a free Network-attached storage (NAS) server

It supports the following protocols:
=> CIFS (Samba) - Mac / UNIX / Windows can use CIFS to store files
=> FTP
=> NFS
=> Rsync
=> AFP

You can use local user authentication and software RAID (0, 1, 5), with a web-based configuration interface.
You can install the software on a hard disk or a flash drive. A sample setup:
* Intel 1.7 GHz Celeron
* 1GB SDRAM
* 3 x 80GB IDE hard disks
* 10/100/1000 Mbps LAN card
* RAID 5 with FreeNAS (highest read throughput, as I mostly read files such as MP3s, photos and other data)
* BIOS configured to boot from a USB pen drive, without a monitor or keyboard
* D-Link gigabit Ethernet 8-port switch and wireless AP router

FreeNAS is easy to use and perfect for building a low-cost home file server. It can be used as:
=> Home music / media server
=> Home backup server
=> Home FTP server
and so on.

Tuesday, May 6, 2008

Hidden switch: Display disk total statistics including time spent reading and writing data

Most Linux admins are not aware of vmstat's hidden (undocumented) switch -D, which displays a nice summary of the disk I/O subsystem since boot time. Output includes the total time spent reading and writing data, merged reads and merged writes (a kernel disk I/O optimization technique), and other parameters.
$ vmstat -D
30 disks
8 partitions
13739406 total reads
3887474 merged reads
1180260353 read sectors
109630647 milli reading
118650080 writes
250115464 merged writes
2950543038 written sectors
2382220771 milli writing
0 inprogress IO
113832 milli spent IO

This is a backup server hosted at a data center:
$ uptime
12:15:07 up 33 days, 9:15, 2 users, load average: 1.71, 1.49, 1.03

Over the last 33 days the server spent a total of 109630647 milliseconds reading from disk. To get more detailed per-disk I/O stats, use the -d option.

Copy Set of Files to All Users Home Directory

First grab all user names from /etc/passwd:
cut -d: -f1 /etc/passwd

Next, use a loop to copy the file and set the correct permissions on it.

Finally, use id to obtain the correct user ID and group ID for each user.
FILE="/path/to/file"; UHOME="/home"  # set these as needed (not defined in the original)
USERS=$(cut -d':' -f1 /etc/passwd) # get list of all users
for u in $USERS
do
  /bin/cp "$FILE" "${UHOME}/${u}"
  chown "$(id -un "$u"):$(id -gn "$u")" "${UHOME}/${u}/$(basename "$FILE")"
done

You can also copy multiple files using an inner and outer loop:
FILES="/etc/skel/.newconfig-file /etc/skel/.update-config /chroot/jail/.force.conf"
USERS=$(cut -d':' -f1 /etc/passwd) # get list of all users
for u in $USERS
do
  for f in $FILES
  do
    /bin/cp "$f" "${UHOME}/${u}"
    chown "$(id -un "$u"):$(id -gn "$u")" "${UHOME}/${u}/$(basename "$f")"
  done
done

Add additional security checks, such as:
* User should be a normal user
* User must have a directory
* User must have a valid password / account.
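A minimal sketch of the first two checks, run here against an inline sample passwd file (the account names and the UID >= 1000 cutoff are assumptions - adjust for your distro; the password-validity check would need /etc/shadow and is omitted):

```shell
home=$(mktemp -d)   # stand-in home directory that actually exists
check_users() {
    while IFS=: read -r user _ uid _ _ dir _; do
        [ "$uid" -ge 1000 ] || continue   # normal (non-system) users only
        [ -d "$dir" ] || continue         # user must have a home directory
        echo "$user"                      # passed all checks
    done
}
eligible=$(check_users <<EOF
root:x:0:0:root:/root:/bin/bash
alice:x:1000:1000::$home:/bin/bash
ghost:x:1001:1001::/no/such/dir:/bin/bash
EOF
)
echo "$eligible"
```

Feed it the real file (check_users < /etc/passwd) before running the copy loop.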

Thursday, May 1, 2008

Setup, change and pimp out Linux / UNIX shell prompt

By default most distros display the hostname and current working directory. The prompt is controlled via special shell variables; you need to set the PS1, PS2, PS3 and PS4 variables.
* PS1 - The value is expanded and used as the primary prompt string. The default value is \s-\v\$ .
* PS2 - The value is expanded as with PS1 and used as the secondary prompt string. The default is >
* PS3 - The value is used as the prompt for the select command
* PS4 - The value expanded as with PS1 and the value is printed before each command bash displays during an execution trace. The first character of PS4 is replicated multiple times, as necessary, to indicate multiple levels of indirection. The default is +

To display current prompt setting:
$ echo $PS1

The prompt can be changed by assigning a new value to PS1. When executing interactively, bash displays the primary prompt PS1 when it is ready to read a command, and the secondary prompt PS2 when it needs more input to complete a command. Bash allows these prompt strings to be customized by inserting a number of backslash-escaped special characters that are decoded as follows:
* \a : an ASCII bell character (07)
* \d : the date in "Weekday Month Date" format (e.g., "Tue May 26")
* \D{format} : the format is passed to strftime(3) and the result is inserted into the prompt string; an empty format results in a locale-specific time representation. The braces are required
* \e : an ASCII escape character (033)

* \h : the hostname up to the first '.'
* \H : the hostname

* \j : the number of jobs currently managed by the shell
* \l : the basename of the shell’s terminal device name
* \n : newline
* \r : carriage return
* \s : the name of the shell, the basename of $0 (the portion following the final slash)
* \t : the current time in 24-hour HH:MM:SS format
* \T : the current time in 12-hour HH:MM:SS format
* \@ : the current time in 12-hour am/pm format
* \A : the current time in 24-hour HH:MM format
* \u : the username of the current user
* \v : the version of bash (e.g., 2.00)
* \V : the release of bash, version + patch level (e.g., 2.00.0)

* \w : the current working directory, with $HOME abbreviated with a tilde
* \W : the basename of the current working directory, with $HOME abbreviated with a tilde
* \! : the history number of this command
* \# : the command number of this command
* \$ : if the effective UID is 0, a #, otherwise a $
* \nnn : the character corresponding to the octal number nnn
* \\ : a backslash
* \[ : begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt
* \] : end a sequence of non-printing characters

To display today's date and hostname in the prompt:

PS1="\d \h $ "
Sat Jun 02 server $

Now setup prompt to display date/time, hostname and current directory:
$ PS1="[\d \t \u@\h:\w ]
$ "
[Sat Jun 02 14:24:12 vivek@server:~ ] $

To add colors to the prompt, open the /etc/bashrc (Redhat-ish), /etc/bash.bashrc (Debian) or /etc/bash.bashrc.local (Suse) file and append the following code:

# If the id command returns zero, you've root access.
if [ $(id -u) -eq 0 ]; then # root goes red
PS1="\\[$(tput setaf 1)\\]\\u@\\h:\\w #\\[$(tput sgr0)\\]"
else # normal
PS1="[\\u@\\h:\\w] $"
fi

Of course, you can include whatever you like in the prompt.

To add colors to the shell prompt use the following export command syntax:
'\e[x;ym $PS1 \e[m'

* \e[ Start color scheme

* x;y Color pair to use (x;y)
* $PS1 is your shell prompt
* \e[m Stop color scheme

To set a red color prompt:
$ export PS1="\e[0;31m[\u@\h \W]\$ \e[m "
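One caveat with the line above: color escapes should be wrapped in \[ and \] (see the escape table earlier) so bash does not count them toward the prompt width; otherwise long command lines can wrap oddly. A safer variant:

```shell
# Red prompt with the non-printing escapes bracketed in \[ ... \]
export PS1="\[\e[0;31m\][\u@\h \W]\\$ \[\e[m\] "
echo "$PS1"
```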

Color Code
Black 0;30
Blue 0;34
Green 0;32
Cyan 0;36
Red 0;31
Purple 0;35
Brown 0;33

Replace digit 0 with 1 to get light color version.

To make the prompt setting permanent, add above export command to your .bash_profile file or .bashrc file.
export PS1="\e[0;31m[\u@\h \W]\$ \e[m"

You can also use the tput command. For example, to display a red prompt using tput:
export PS1="\[$(tput setaf 1)\]\u@\h:\w $ \[$(tput sgr0)\]"

Handy tput commands:
* tput bold - Bold effect
* tput rev - Display inverse colors
* tput sgr0 - Reset everything
* tput setaf {CODE} - Set foreground color, see color {CODE} below
* tput setab {CODE} - Set background color, see color {CODE} below

Colors {code} code for tput command
0 Black
1 Red
2 Green
3 Yellow
4 Blue
5 Magenta
6 Cyan
7 White

Read the man page of bash and tput for more information.

Demilitarized Zone Ethernet Interface Requirements and Configuration

Demilitarized zone, or DMZ, is used to secure an internal network from external access. You can use Linux firewall to create one easily. There are many different ways to design a network with a DMZ. The basic method is to use a single Linux firewall with 3 Ethernet cards. The following simple example discusses DMZ setup and forwarding public traffic to internal servers.

Consider the following DMZ host with 3 NIC:
[a] eth0 with private IP address - Internal LAN ~ Desktop system
[b] eth1 with public IP address - WAN connected to ISP router
[c] eth2 with private IP address - DMZ connected to Mail / Web / DNS and other private servers

To route traffic between the public interface and a DMZ server - for example, to forward all incoming SMTP requests to a dedicated mail server - use the PREROUTING chain of the NAT table to send the packets to the proper destination.

This can be done with appropriate IPTABLES firewall rule to route traffic between LAN to DMZ and public interface to DMZ.

### end init firewall .. Start DMZ stuff ####
# forward traffic between DMZ and LAN
iptables -A FORWARD -i eth0 -o eth2 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth2 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# forward traffic between DMZ and WAN servers SMTP, Mail etc
iptables -A FORWARD -i eth2 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth2 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
# Route incoming SMTP (port 25) traffic to the DMZ mail server
iptables -t nat -A PREROUTING -p tcp -i eth1 -d {WAN-IP} --dport 25 -j DNAT --to-destination {DMZ-MAIL-SERVER-IP}
# Route incoming HTTP (port 80) traffic to the DMZ load balancer IP
iptables -t nat -A PREROUTING -p tcp -i eth1 -d {WAN-IP} --dport 80 -j DNAT --to-destination {DMZ-LB-IP}
# Route incoming HTTPS (port 443) traffic to the DMZ reverse load balancer IP
iptables -t nat -A PREROUTING -p tcp -i eth1 -d {WAN-IP} --dport 443 -j DNAT --to-destination {DMZ-REVERSE-LB-IP}
### End DMZ .. Add other rules ###

Multi-port redirection: use the multiport iptables module to match a set of source or destination ports; up to 15 ports can be specified. For example, to route incoming HTTP (port 80) and HTTPS (port 443) traffic to the DMZ load balancer IP:
iptables -t nat -A PREROUTING -p tcp -i eth1 -d {WAN-IP} -m multiport --dports 80,443 -j DNAT --to-destination {DMZ-LB-IP}
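A reminder that goes with the rules above: none of the FORWARD/DNAT rules do anything unless the kernel has IPv4 forwarding turned on. A quick check (enabling it requires root):

```shell
# 1 = forwarding enabled, 0 = disabled
cat /proc/sys/net/ipv4/ip_forward
# To enable it (as root):
#   sysctl -w net.ipv4.ip_forward=1
```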

The above design has a few pitfalls:
1. Single point of failure - The firewall becomes a single point of failure for the network.
2. Hardware - The firewall Host must be able to handle all of the traffic going to the DMZ as well as the internal network.

Set the DNS from the command line

Open /etc/resolv.conf and add the line:
nameserver {IP-OF-THE-DNS-1}
nameserver {IP-OF-THEISP-DNS-SERVER-2}

To test the DNS configuration, type any one of the following commands:
$ host
$ dig
$ ping
$ nslookup

If you see valid output, such as an actual IP address, or you are able to ping a remote server via its hostname, DNS is working for you. If you see a timeout error, also make sure you have a valid default gateway set up.
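To see which servers resolv.conf currently points at, awk works well. A small sketch using an inline sample file (the 192.0.2.x addresses are documentation placeholders):

```shell
# Print only the nameserver IPs from a resolv.conf-style file
ns=$(awk '/^nameserver/ { print $2 }' <<'EOF'
nameserver 192.0.2.1
nameserver 192.0.2.2
search example.com
EOF
)
echo "$ns"
```

On a real system, run awk '/^nameserver/ { print $2 }' /etc/resolv.conf.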

Understanding the tr command

tr copies the given input to the output, substituting or deleting selected characters. It is frequently used in shell scripts.

tr [options] "set1" "set2"
echo something | tr "set1" "set2"
tr "set1" "set2" < input.txt
tr "set1" "set2" < input.txt > output.txt

For example, translate 'linux' to upper-case:
$ echo 'linux' | tr "[:lower:]" "[:upper:]"
$ echo 'linux' | tr "a-z" "A-Z"
$ echo 'I LovE linuX. one is better Than 2' | tr "a-z" "A-Z"

Create a list of the words in /path/to/file, one per line:
$ tr -cs "[:alpha:]" "\n" < /path/to/file
* -c : Complement the set of characters in string1
* -s : Replace each input sequence of a repeated character that is listed in SET1 with a single occurrence of that character

In the following example the user is asked for confirmation before the file is deleted. If the user responds in lower case, tr does nothing; upper-case characters are translated to lower case. This ensures that even if the user responds with YES, YeS, YEs, etc., the script still removes the file:
echo -n "Enter file name : "
read myfile
echo -n "Are you sure ( yes or no ) ? "
read confirmation
confirmation="$(echo ${confirmation} | tr 'A-Z' 'a-z')"
if [ "$confirmation" = "yes" ]; then
    [ -f "$myfile" ] && /bin/rm "$myfile" || echo "Error - file $myfile not found"
else
    : # do nothing
fi

Remove all non-printable characters from myfile.txt
$ tr -cd "[:print:]" < myfile.txt

Squeeze two or more successive blank spaces into one from a copy of the text in a file called input.txt and save the output to a new file called output.txt:
tr -s ' ' < input.txt > output.txt

The -d option deletes every instance of the characters specified in set1 (tr works on characters, not strings). For example, the following removes every character that appears in the word nameserver from a copy of the text in /etc/resolv.conf and writes the output to a file called ns.ipaddress.txt - leaving just the IP addresses, since digits and dots are untouched:
tr -d 'nameserver' < /etc/resolv.conf > ns.ipaddress.txt
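A quick demonstration that tr -d works character-wise - the letters n, a, m, e, s, r, v are removed everywhere they occur, while digits and dots survive:

```shell
out=$(printf 'nameserver 192.168.1.1\n' | tr -d 'nameserver')
echo "$out"   # the letters are gone, the address survives
```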

Configure Jumbo Frames to Boost Network Performance / Throughput

Jumbo frames are Ethernet frames with more than 1500 bytes of payload MTU. Most modern Linux distros (Kernel 2.6.17+) support frames larger than 1500 bytes.

First, make sure your network driver supports custom MTU. Second you need to have a compatible gigabit NIC and switch (such as Cisco Catalyst 4000/4500 Switches with Supervisor III or Supervisor IV ) that is jumbo frame clean.

Jumbo frames can reduce server overhead, such as a big decrease in CPU usage when transferring large files. You should also see some increase in network throughput.

Configure the eth0 interface for jumbo frames. Warning! These examples depend upon compatible network gear, such as gigabit Ethernet switches and gigabit Ethernet network interface cards.
# ifconfig eth0 mtu 9000

To make the change permanent, append the MTU="9000" directive to the network configuration file for eth0.
# vi /etc/sysconfig/network-scripts/ifcfg-eth0 # Redhat/CentOS/Fedora

# vi /etc/network/interfaces # Debian/Ubuntu

# /etc/init.d/networking restart

To confirm the MTU used between two specific devices:
# ip route get {IP-address}
# ip route get dev eth0 src
cache mtu 9000 advmss 1460 hoplimit 64
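The current MTU can also be read straight from sysfs, which is handy in scripts. A sketch (eth0 is an assumption; lo is shown because it exists on every Linux box):

```shell
# Read the interface MTU from sysfs; substitute eth0 for lo as needed
mtu=$(cat /sys/class/net/lo/mtu)
echo "lo MTU: $mtu"
```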

Application protocol tuning: you may need to tune application/network protocols such as NFS and SMB to take advantage of jumbo frames.

Monitor and debug system calls using strace

strace is useful for diagnostic, instructional, and debugging purposes. System admins, diagnosticians and troubleshooters will find it invaluable for solving problems with programs for which the source is not readily available.

strace for Linux added many of the features of the truss command from SVR4 (found on Solaris/FreeBSD), producing an strace that works on multiple platforms.

$ strace -o /tmp/output.txt /bin/bash
$ grep '^open' /tmp/output.txt

To see only a trace of the open, close, read, and write system calls, enter:
$ strace -e trace=open,close,read,write df > output.txt

Another good option is ltrace - its use is very similar to strace command.

Wednesday, April 30, 2008

Find out Linux Disk utilization

You can use iostat to find out disk utilization and for monitoring system input/output device loading by observing the time the physical disks are active in relation to their average transfer rates.

iostat -d -x interval count

  • -d : Display the device utilization report (d == disk)
  • -x : Display extended statistics including disk utilization
  • interval : The time period in seconds between two samples. iostat 2 gives output at 2-second intervals.
  • count : The number of times the data is reported. iostat 2 5 gives output at 2-second intervals, 5 times.

Display 3 reports of extended statistics at 5 second intervals for disk:
$ iostat -d -x 5 3

Linux 2.6.18-53.1.4.el5 (   12/17/2007

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 1.10 39.82 3.41 13.59 309.50 427.48 43.36 0.17 10.03 1.03 1.75
sdb 0.20 18.32 1.15 6.08 117.36 195.25 43.22 0.51 71.14 1.26 0.91

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 108.40 1.40 64.40 49.60 1382.40 21.76 0.04 0.67 0.44 2.92
sdb 0.00 37.80 0.00 245.20 0.00 2254.40 9.19 28.91 108.49 1.08 26.36

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 97.01 1.00 57.29 39.92 1234.33 21.86 0.03 0.58 0.50 2.89
sdb 0.00 38.32 0.00 288.42 0.00 2623.55 9.10 32.97 122.30 1.15 33.27


  • rrqm/s : The number of read requests merged per second that were queued to the hard disk
  • wrqm/s : The number of write requests merged per second that were queued to the hard disk
  • r/s : The number of read requests per second
  • w/s : The number of write requests per second
  • rsec/s : The number of sectors read from the hard disk per second
  • wsec/s : The number of sectors written to the hard disk per second
  • avgrq-sz : The average size (in sectors) of the requests that were issued to the device.
  • avgqu-sz : The average queue length of the requests that were issued to the device
  • await : The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
  • svctm : The average service time (in milliseconds) for I/O requests that were issued to the device
  • %util : Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%.

To interpret the output for optimization, first note down the following values from the iostat report:

  1. The average service time (svctm)
  2. Percentage of CPU time during which I/O requests were issued (%util)
  3. See if a hard disk reports consistently high reads/writes (r/s and w/s)

If any one of these is high, take one of the following actions:

  • Get high speed disk and controller for file system (for example move from SATA I to SAS 15k disk)
  • Tune software or application or kernel or file system for better disk utilization
  • Use RAID array to spread the file system

For example, from the above iostat report it appears that /dev/sdb is under load.
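Scanning reports by eye gets old. A small awk sketch that flags devices whose %util (last column) crosses a threshold, fed here with the second sample report above (the 20% threshold is an arbitrary choice):

```shell
busy=$(awk '$1 ~ /^sd/ && $NF + 0 > 20 { print $1, $NF }' <<'EOF'
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 108.40 1.40 64.40 49.60 1382.40 21.76 0.04 0.67 0.44 2.92
sdb 0.00 37.80 0.00 245.20 0.00 2254.40 9.19 28.91 108.49 1.08 26.36
EOF
)
echo "$busy"
```

In practice, pipe live output through it: iostat -d -x 5 3 | awk '...'.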

Configure Linux to track and log failed login attempt records

You can use the faillog command to display faillog records or to set login failure limits. Without arguments it displays only the faillog records of users who have ever had a login failure.

PAM settings: under RHEL/CentOS 5.x, you need to modify /etc/pam.d/system-auth to configure the pam_tally PAM module. Otherwise faillog will never display failed login attempts.

PAM configuration to record failed login attempts: pam_tally maintains a count of attempted accesses, can reset the count on success, and can deny access if too many attempts fail.

# vi /etc/pam.d/system-auth # Append
auth required pam_tally.so no_magic_root
account required pam_tally.so deny=3 no_magic_root lock_time=180


  • deny=3 : Deny access if tally for this user exceeds 3 times.
  • lock_time=180 : Always deny for 180 seconds after a failed attempt. There is also an unlock_time=n option, which allows access n seconds after a failed attempt. If this option is used, the user will be locked out for the specified amount of time after exceeding the maximum allowed attempts; otherwise the account is locked until the lock is removed by manual intervention of the system administrator.
  • magic_root : If the module is invoked by a user with uid=0 the counter is not incremented. The sys-admin should use this for user launched services, like su, otherwise this argument should be omitted.
  • no_magic_root : Avoid root account locking, if the module is invoked by a user with uid=0

Display all failed login attempts for a user
# faillog -u didi

Display faillog records for all users
# faillog -a

-r can reset the counters of login failures or one record if used with the -u USERNAME option:

# faillog -r
# faillog -r -u vivek <-- only reset counter for vivek user

On a large Linux login server, it might be useful to clear all counts every midnight or every week from a cron job:
# crontab -e
@weekly /usr/bin/faillog -r

Clonezilla Ghost - The Linux answer to Norton Ghost

Norton Ghost is slow. There is a Norton Ghost Corporate Edition with multicasting and a faster cloning system.

Clonezilla is free, open source, and supports multicasting. With DRBL (Diskless Remote Boot in Linux, which provides a diskless or systemless environment for client machines) and network-boot-enabled client computers, the only thing you have to prepare is a Clonezilla server. The best thing is that you do not even have to prepare a bootable CD or floppy with Partition Image for every client computer.

Download Clonezilla
A simple howto on setting up Clonezilla/DRBL on Ubuntu Linux
A simple documentation to quickly setup up Clonezilla on a PXE server

Copy hard disk or partition image to another system using a network and netcat

netcat is a swiss army knife in networking. It is designed to be a reliable "back-end" tool that can be used directly or easily driven by other programs/scripts, as well as a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.

One handy and trusted use is migrating data between two servers' hard drives. You can also use ssh for the same purpose, but encryption adds its own overhead.

A sample setup:

HostA //
HostB //

To copy /dev/sda on HostA to /dev/sdb on HostB, first login as root.

On hostB (receiving end ~ write image mode), open port:
# netcat -p 2222 -l |bzip2 -d | dd of=/dev/sdb


  • -p 2222 : the source port, subject to privilege restrictions and availability. Make sure port 2222 is not used by another process.
  • -l : listen for an incoming connection rather than initiate a connection to a remote host.
  • bzip2 -d : Decompresses the incoming stream. The sender compresses the image using the Burrows-Wheeler block sorting text compression algorithm and Huffman coding, which speeds up the network transfer (-d forces decompression mode).
  • dd of=/dev/sdb : The target hard disk. You can also specify a partition such as /dev/sdb1

On hostA (send data over a network ~ read image mode), login as root:

# bzip2 -c /dev/sda | netcat hostB 2222 # OR use IP:
# bzip2 -c /dev/sda | netcat 2222

Apparently, this process takes its own time.

A note about netcat 1.84-10 and above: the above syntax will generate an error, as it is an error to use -l in conjunction with -p, -s, or -z. Additionally, any timeouts specified with -w are ignored. So use nc like this:

On hostA:
# nc -l 2222 > /dev/sdb
On hostB:
# nc hostA 2222< /dev/sda # Or
# nc 2222< /dev/sda

From hostB, connect to the listening nc process at port 2222 on hostA, feeding it the device (/dev/sda) which is to be transferred. To add compression:
On hostA:
# nc -l 2222 | bzip2 -d > /dev/sdb
On hostB:
# bzip2 -c /dev/sda | nc 2222
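The same pipeline can be rehearsed locally with ordinary files before pointing it at raw disks. A sketch (the payload and temp files are made up; bzip2 must be installed):

```shell
src=$(mktemp); dst=$(mktemp)
printf 'disk image payload\n' > "$src"
# compress | (imagine the network in between) | decompress
bzip2 -c "$src" | bzip2 -d > "$dst"
cmp -s "$src" "$dst" && echo "images match"
```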

Restrict Users to SCP and SFTP and Block SSH Shell Access with rssh

FTP is an insecure protocol, but it is often needed. OpenSSH Server can transfer files using SCP and SFTP (secure FTP), but it also grants ssh shell access, which is not always desirable. Here is how sftp works:
SCP/SFTP -> sshd -> Call sftpd subsystem -> Requires a shell -> User login

rssh is a restricted shell for providing limited access to a host via ssh; it allows only the following operations:

  • scp - Secure file copy
  • sftp - Secure FTP
  • cvs - Easily retrieve old versions to see exactly which change caused the bug
  • rsync - Backup and sync file system
  • rdist - Backup / RDist program maintains identical copies of files on multiple hosts.
  • Default configuration file is located at /etc/rssh.conf (FreeBSD - /usr/local/etc/rssh.conf)
  • Default rssh binary location /usr/bin/rssh (FreeBSD - /usr/local/bin/rssh)
  • Default port: none (openssh port 22 is used - rssh is a shell with security features)
All you have to do is to set a user's shell to /usr/bin/rssh. For example:

Login as the root user, create a new user
# useradd -m -d /home/didi -s /usr/bin/rssh didi

Change user's shell:
# usermod -s /usr/bin/rssh didi # or
# chsh -s /usr/bin/rssh didi

Now try login via ssh or sftp using username didi:
$ sftp # or
$ ssh

This account is restricted by rssh.
This user is locked out.
Connection to closed.

By default rssh locks down everything, including any sort of access. Now grant access to sftp and scp for all users by appending the following lines:
# vi /etc/rssh.conf
allowscp
allowsftp

rssh reads its configuration on the fly (there is no rssh daemon). Now the user can run scp and sftp, without shell access:
$ scp /path/to/file # or
$ sftp

Connecting to's password:
sftp> pwd
Remote working directory: /home/didi

Understanding command configuration options

  • allowscp : Tells the shell that scp is allowed.
  • allowsftp : Tells the shell that sftp is allowed.
  • allowcvs : Tells the shell that cvs is allowed.
  • allowrdist : Tells the shell that rdist is allowed.
  • allowrsync : Tells the shell that rsync is allowed.

Tip: Create a group for rssh users, and limit executable access to the binaries to users in that group to improve security. Use standard file permissions carefully and appropriately.

Chroot jail to lock users to their home only

rssh's chrootpath option is used to set the root of the chroot jail. A chroot affects only the current process and its children. Without a jail, a user whose home directory is /home/didi can still access files in /etc, /sbin or /bin, which allows an attacker to install sneaky things in places such as /tmp.

=> Chroot directory: /users.
Tip: If possible mount /users filesystem with the noexec/nosuid option to improve security.

=> Required directories in jail:

  • /users/dev - Device file
  • /users/etc - Configuration file such as passwd
  • /users/lib - Shared libs
  • /users/usr - rssh and other binaries
  • /users/bin - Copy default shell such as /bin/csh or /bin/bash

=> Required files in jail at /users (default for RHEL / CentOS / Debian Linux):

  • /etc/
  • /etc/*
  • /etc/
  • /etc/nsswitch.conf
  • /etc/passwd
  • /etc/group
  • /etc/hosts
  • /etc/resolv.conf
  • /usr/bin/scp
  • /usr/bin/rssh
  • /usr/bin/sftp
  • /usr/libexec/openssh/sftp-server OR /usr/lib/openssh/sftp-server
  • /usr/libexec/rssh_chroot_helper OR /usr/lib/rssh/rssh_chroot_helper (suid must be set on this binary)
  • /bin/sh or /bin/bash (default shell)

Tip: Limit the binaries which live in the jail to the absolute minimum required to improve security. Usually /bin/bash and /bin/sh are not required, but some systems may give errors without them.

The files placed in the jail directory mimic their placement in the root file system, so you need to copy all required files accordingly. For example, copy /usr/bin/rssh to /users/usr/bin/rssh.

Building the Chrooted Jail

# mkdir -p /users/{dev,etc,lib,usr,bin}
# mkdir -p /users/usr/bin
# mkdir -p /users/usr/libexec/openssh

# mknod -m 666 /users/dev/null c 1 3

# cd /users/etc
# cp /etc/ .
# cp -avr /etc/ .
# cp /etc/ .
# cp /etc/nsswitch.conf .
# cp /etc/passwd .
# cp /etc/group .
# cp /etc/hosts .
# cp /etc/resolv.conf .

Open the /users/etc/group and /users/etc/passwd files and remove root and all other accounts.

# cd /users/usr/bin
# cp /usr/bin/scp .
# cp /usr/bin/rssh .
# cp /usr/bin/sftp .
# cd /users/usr/libexec/openssh/
# cp /usr/libexec/openssh/sftp-server . # Or

# cp /usr/lib/openssh/sftp-server .
# cd /users/usr/libexec/
# cp /usr/libexec/rssh_chroot_helper . # Or

# cp /usr/lib/rssh/rssh_chroot_helper .
# cd /users/bin/
# cp /bin/sh . # Or

# cp /bin/bash .

The library files that these binaries need can be found by using ldd / strace. For example:
ldd /usr/bin/sftp

Copy all these libraries to /users/lib and other appropriate location. Or use l2chroot script:
# cd /sbin
# wget -O l2chroot
# chmod +x l2chroot

Open l2chroot and set the BASE variable to point to the chroot directory (jail) location:

# l2chroot /usr/bin/scp
# l2chroot /usr/bin/rssh
# l2chroot /usr/bin/sftp
# l2chroot /usr/libexec/openssh/sftp-server # or

# l2chroot /usr/lib/openssh/sftp-server
# l2chroot /usr/libexec/rssh_chroot_helper # or

# l2chroot /usr/lib/rssh/rssh_chroot_helper
# l2chroot /bin/sh # or

# l2chroot /bin/bash

The syslog library functions work by writing messages to a FIFO file such as /dev/log. You need to pass the -a /path/to/chroot/dev/log option to syslogd. This argument specifies an additional socket for syslogd to listen on, which is needed when daemons run within a chroot() environment. Up to 19 additional sockets can be used; to have more, increase the MAXFUNIX symbol in the syslogd.c source file.
# vi /etc/sysconfig/syslog
Append -a /users/dev/log
SYSLOGD_OPTIONS="-m 0 -a /users/dev/log"

# /etc/init.d/syslog restart
(For Debian/Ubuntu, apply changes to /etc/default/syslogd)

Set chrootpath to /users, then restart sshd:
# vi /etc/rssh.conf
chrootpath=/users
# /etc/init.d/sshd restart

Add user to the jail:
# useradd -m -d /users/didi -s /usr/bin/rssh didi

sftp> pwd
Remote working directory: /vivek
sftp> cd /tmp
Couldn't canonicalise: No such file or directory

Per User Configuration Options for chroot Jail

From the man page: The user configuration directive allows for the configuration of options on a per-user basis. THIS KEYWORD OVERRIDES ALL OTHER KEYWORDS FOR THE SPECIFIED USER. That is, if you use a user keyword for user foo, then foo will use only the settings in that user line, and not any of the settings set with the keywords above. The user keyword’s argument consists of a group of fields separated by a colon (:), as shown below. The fields are, in order:

  • username : The username of the user for whom the entry provides options
  • umask : The umask for this user, in octal, just as it would be specified to the shell
  • access bits : Five binary digits, which indicate whether the user is allowed to use rsync, rdist, cvs, sftp, and scp, in that order. One means the command is allowed, zero means it is not.
  • path : The directory to which this user should be chrooted (this is not a command, it is a directory name).

rssh examples of configuring per-user options:

# vi /etc/rssh.conf
user=tom:077:00010 # Allow user tom sftp access, bypassing our chroot jail
user=jerry:011:00100 # Provide jerry cvs access with no chroot
user=spike:011:10000 # Provide spike rsync access with no chroot
user="tyke:011:00001:/users" # Provide tyke scp access with chroot jail located at /users
user=nibbles:011:00001:"/usr/local/tv/shows/tom and jerry" # Provide nibbles scp access with a chroot directory; a path containing spaces must be quoted
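The five access bits are easy to misread. A small helper sketch (the function name is made up) that decodes a bit string into the commands it allows, in the rsync, rdist, cvs, sftp, scp order given above:

```shell
decode_bits() {
    out=""
    i=1
    for cmd in rsync rdist cvs sftp scp; do
        # append the command name if its positional bit is 1
        [ "$(printf '%s' "$1" | cut -c"$i")" = "1" ] && out="$out $cmd"
        i=$((i + 1))
    done
    echo "${out# }"
}
decode_bits 00011   # sftp and scp are allowed
```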

Sunday, April 27, 2008

Marry mutt and gmail

I found one of my colleagues showing off a console tool to read/send emails; its name is mutt and it looks wicked cool, so I want it!

I'll pick gmail as mutt's husband here, as it's probably the most popular email service nowadays. After a few hours' setup marathon, I finally got mutt pulling down gmail on my laptop :)... well, still got some work to do on filters, though.

First things first, get everything set up. It can be lengthy, but it's worth the time.

To do it real quick,

1. Get mutt (obviously), procmail, fetchmail
2. Get Gmail's SSL certificate
$ openssl s_client -connect -showcerts
paste the "BEGIN...END" code to ~/.certs/gmail.pem

3. This gets a little tricky, as the article doesn't elaborate clearly.
If you are using Debian/Ubuntu, follow the instructions there. If you're using another distro like me, here is the way to get it straight:
$ locate mutt|grep bundle

copy the first instance of "BEGIN...END" (there should be many) and paste it to: ~/.certs/equifax.pem

Weird name... I don't know what it means, just do it.

4. I will skip some long boring stuff and go directly to the download, where you'll obtain a zip file which includes the third and last certificate.
Unzip the file, go to the Thawte Server Roots directory, and issue:

cp ThawtePremiumServerCA_b64.txt $HOME/.cert/ThawtePremiumServerCA.txt

5. Now all the certificates are set; you'll need a tool called c_rehash, a perl script that comes with the openssl package.

$c_rehash $HOME/.certs/
$openssl s_client -connect -CApath $HOME/.certs/

Unfortunately, my distro Fedora 8 isn't generous enough to provide this tool, and I couldn't find one by googling... Out of nowhere, I found a guy who had the same problem and translated it into an sh script... and it actually worked. Check it out here:

paste the script code into a file under .certs/, then:

$ chmod 744
$ ./ .

6. The hardest part ends here; the rest of the work is all about setting up the config files - just follow the article.

7. voilà, mutt is online :)...

Oh wait, I found that my mutt, although working beautifully, couldn't handle HTML mails... Fine, more googling... OK, to get this done, I found a URL with very straightforward steps.

Of course, this just gets mutt starting to work, mutt, is an extremely daunting tool for Linux /Unix newbies, as well as a great challenge to Unix vets.

However, the effort is worthy, remember the motto of mutt:
"All email clients suck, mutt sucks less" :)

Send an email with an attachment:
$ mutt -s "Test mail" -a /tmp/attachment.tar.gz < /tmp/message.txt
(Newer mutt versions require a "--" between the attachment list and the recipient, e.g. mutt -s "Test mail" -a /tmp/attachment.tar.gz -- user@example.com < /tmp/message.txt.)

To support more than 4 GB memory

By default a 32-bit kernel can address only 4GB of memory; it won't see more than this amount even if the memory is physically there.

To extend the kernel's reach, the kernel-PAE package is needed, which provides support for up to 64GB of high memory. A CPU with Physical Address Extensions (PAE) is required; most modern CPUs support it, so no big deal.

# yum install kernel-PAE

# init 6 (of course, a reboot is needed to load the new kernel; note that init 0 halts the box, init 6 reboots it)
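Before installing, it's worth confirming that the CPU actually advertises PAE; on Linux the flag appears in /proc/cpuinfo. A quick sketch:

```shell
# look for the pae flag in the CPU capabilities list (Linux-only path)
if grep -qw pae /proc/cpuinfo; then
    echo "CPU supports PAE - safe to install kernel-PAE"
else
    echo "No PAE flag - stick with the standard kernel"
fi
```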

Booting from SAN, bad idea?

SAN / iSCSI / NAS, or any other centralized network storage, works great for shared data or HA configurations. Although technically it can be used to boot, it's not recommended unless you need diskless nodes.

Moreover, booting from SAN volumes is complicated, as most OSes are not designed for this kind of configuration. Significant scripting and a custom boot procedure are required.

The SAN vendor must support booting a Linux server on your platform. You need to configure the HBA and SAN according to vendor specifications, and you depend entirely on the SAN vendor for drivers and firmware (HBA BIOS) to get things working properly. General principle: don't put all your eggs in one basket, err, one vendor.

A proper fibre channel topology must be used. Make sure multipathing and redundant SAN links are in place, the boot disk LUN is dedicated to a single host, etc.

Check if a shell script is being run by root

BASH stores a user's ID in $UID variable. Your effective user ID is stored in $EUID variable.

#Old way: just add a simple check at the start of the script:
# Init
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi

#A newer way, using EUID:
# Init
# Make sure only root can run our script
if [[ $EUID -ne 0 ]]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi
# ...

#Only root can mount /dev/sdb1:
if [[ $EUID -ne 0 ]]; then
   echo "You must be a root user" 1>&2
   exit 1
fi
mount /dev/sdb1 /mnt/disk2
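A common variant of the same check, sketched here with the privileged line commented out so it is safe to run: instead of refusing, the script re-executes itself under sudo.

```shell
#!/bin/bash
# If not root, re-run this script through sudo (uncomment the exec to enable)
if [ "$(id -u)" -ne 0 ]; then
    echo "Not root; would re-exec as: sudo $0 $*"
    # exec sudo "$0" "$@"
fi
echo "continuing as uid $(id -u)"
```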

Shell completion dig-in

Fine tuning shell completion with the ESC key

Bash allows you to fine tune file completion using ESC key combinations. People get amazed when I use these combinations in front of them. For example, to insert all possible completions into your command, use ESC+*. Let us see how to back up all /etc/*.conf files; type the command:
tar -zcvf /dev/rt0 /etc/*.conf {hit ESC followed by *}
As soon as you hit ESC+*, the shell replaces the /etc/*.conf part with the names of all matching wildcard patterns:
tar -zcvf /dev/rt0 /etc/aatv.conf /etc/adduser.conf /etc/apg.conf /etc/brltty.conf /etc/ca-certificates.conf /etc/cvs-cron.conf /etc/cvs-pserver.conf /etc/debconf.conf ....

To display all possible completions of commands, filenames or usernames, type ESC+?. To display all usernames starting with the word le, type:
cat ~le {hit ESC followed by ?}

complete command

There is also a built-in command called complete. It is used to specify how arguments are to be completed for a command. For example, when you type passwd (or any other user admin command such as su / usermod) and hit the tab key, bash will show you a list of all available users, i.e. all user admin related commands will complete only user names:
complete -u su usermod userdel passwd chage write chfn groups slay w
Now type passwd and hit the tab key to see all usernames:

# passwd {hit tab key}

avahi          bin            dhcp           gdm            haldaemon      klog           mail           news           root           sys            uucp
avahi-autoipd cupsys dnsmasq gnats hplip list man nobody sshd syslog vivek
backup daemon games guest irc lp messagebus proxy sync telnetd www-data

To make life even easier, source the bash completion file from your startup script:
$ source /etc/bash_completion
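complete is not limited to -u; the -W option completes from a fixed word list, which is handy for your own scripts. A sketch for a hypothetical myservice wrapper (myservice is made up for illustration):

```shell
# complete the (made-up) myservice command with its three sub-commands only
complete -W "start stop restart" myservice
# inspect the rule that was just registered
complete -p myservice
```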

Reboot after kernel panic

By default, the kernel will not reboot after a panic. To change this:

# vi /etc/sysctl.conf

kernel.panic = 10 # reboot after 10 seconds
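Editing /etc/sysctl.conf alone only takes effect at the next boot; the knob can also be flipped and inspected live. A short sketch:

```shell
# apply immediately without rebooting (needs root):
#   sysctl -w kernel.panic=10     # or: sysctl -p  (re-reads /etc/sysctl.conf)
# read the live value back; 0 means "wait forever on panic"
cat /proc/sys/kernel/panic
```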

How to format and connect to an iSCSI Volume

Install the open-iscsi package, a high-performance, transport-independent iSCSI implementation. This package is also known as the Linux Open-iSCSI Initiator.

Requires kernel version 2.6.16 or later.

Install Open-iSCSI Initiator

$ sudo apt-get install open-iscsi

Open-iSCSI default configuration

Default configuration file: /etc/iscsi/iscsid.conf or ~/.iscsid.conf.

# vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.username = MY-ISCSI-USER
node.session.auth.password = MY-ISCSI-PASSWORD
discovery.sendtargets.auth.username = MY-ISCSI-USER
discovery.sendtargets.auth.password = MY-ISCSI-PASSWORD
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536

# /etc/init.d/open-iscsi restart

Now run a discovery against the iscsi target host:
# iscsiadm -m discovery -t sendtargets -p ISCSI-SERVER-IP-ADDRESS

For example:
# iscsiadm -m discovery -t sendtargets -p

# /etc/init.d/open-iscsi restart

Format iSCSI Volume

Now an additional drive should appear on the system, such as /dev/sdc. To find the device name:

# tail -f /var/log/messages

Create a partition:
# fdisk /dev/sdc

Format partition:
# mkfs.ext3 /dev/sdc1

Mount file system:
# mkdir /iscsi
# mount /dev/sdc1 /iscsi

Force DHCP client (dhclient) to renew ip address

$ sudo dhclient -r # release current ip
$ sudo dhclient # acquire new ip

Restore a backup of a MySQL Database Server

# Read the dump file back into the server:
mysql db-name < db-name.sql

For example, to restore a database called sales:

# First create the database sales:
$ mysql -u root -p

mysql> CREATE DATABASE sales;
mysql> quit;

# now restore database:
$ mysql -u root -p sales < /path/to/sales-backup-file.sql

Find the inode size of the filesystem

# tune2fs -l /dev/sda1 | grep Inode

Install Xcache for PHP5

XCache is an open-source opcode cacher: it accelerates and optimizes PHP performance on servers by removing the compilation time of PHP scripts, caching the compiled state of PHP scripts into shared memory (RAM) and using the compiled version straight from RAM. This can increase the page generation rate by up to 5 times, and it also optimizes many other aspects of PHP scripts and reduces server load.

# Download latest stable release:
# wget

# Compile and install. Use phpize to prepare xcache as a PHP extension for compiling:
# phpize
# ./configure --enable-xcache
# make
# make install

# Default install location

  • 64 bit PHP module installed at /usr/lib64/php/modules/
  • 32 bit PHP module installed at /usr/lib/php/modules/

# Create xcache.ini

Under RHEL / CentOS, php module configurations are stored in /etc/php.d/:
# cd /etc/php.d/
# vi xcache.ini # Append configuration directives:

; change me - 64 bit php => /usr/lib64/php/modules/
; 32 bit php => /usr/lib/php/modules/
zend_extension = /usr/lib64/php/modules/

xcache.admin.auth = On
xcache.admin.user = "mOo"
; xcache.admin.pass = md5($your_password)
xcache.admin.pass = ""

xcache.shm_scheme = "mmap"
xcache.size = 32M
xcache.count = 1
xcache.slots = 8K
xcache.ttl = 3600
xcache.gc_interval = 300

; Same as above, but for the variable cache
; If you don't know for sure that you need this, you probably don't
xcache.var_size = 0M
xcache.var_count = 1
xcache.var_slots = 8K
xcache.var_ttl = 0
xcache.var_maxttl = 0
xcache.var_gc_interval = 300

; N/A for /dev/zero
xcache.readonly_protection = Off

xcache.mmap_path = "/dev/zero"

xcache.cacher = On
xcache.stat = On

Alternatively, you can also copy default xcache.ini to /etc/php.d/
# cp xcache.ini /etc/php.d/
# vi /etc/php.d/xcache.ini

# service httpd restart
# service lighttpd restart

# Make sure xcache is working

$ php -v

You should see a line like "XCache v1.2.1, Copyright (c) 2005-2007, by mOo"

Tunnel X Windows through SSH

SSH is frequently used to tunnel insecure traffic over the Internet, and it is easy to use:

$ ssh -X
$ ssh -X user@

You can request compression of all data to improve the user experience (good for a low-speed link such as a WAN link) using -C (capital C; lowercase -c selects the cipher instead):
$ ssh -C -X user@

Once logged in you can fire up any X windows program:
$ xeyes &
$ oowriter &

To start kde type:
$ startkde &

To start default desktop type:
$ startx

Using httping to measure the Latency and Throughput of Web Servers

# For measuring the latency. Press CTRL+c to exit the program.

$ httping -g URL [-p port]

You can also connect using SSL; for this to work you need to give an https URL or port number 443:
$ httping -l -g # Or
$ httping -g -p 443

# Measure throughput of a webserver

-G forces a GET request instead of a HEAD request, which means the complete page/file must be transferred. You also need to pass -b along with -G to get the transfer speed (in KB/s).
$ httping -Gbg

--- ping statistics ---
53 connects, 53 ok, 0.00% failed
round-trip min/avg/max = 1451.9/2013.6/11656.0 ms
Transfer speed: min/avg/max = 6/19/24 KB

Please note the above command does not measure the latency!

You can also pass -X with -G to show the amount of data transferred (excluding the headers):
$ httping -XGbg

--- ping statistics ---

7 connects, 7 ok, 0.00% failed
round-trip min/avg/max = 1507.7/1693.4/2620.3 ms
Transfer speed: min/avg/max = 9/21/24 KB

# Test remote server CPU

-B along with -G asks the webserver to compress the returned data; this reduces the influence of your connection's bandwidth while increasing the influence of the HTTP server's processor power.
$ httping -BGg

# Flood server

-f flood ping, i.e. do not sit idle between each ping but ping as fast as the computer and network allow (don't run this against production or third-party servers):
$ httping -fg

Check Whether a Directory is Empty or Not

1. Print file names from /tmp. If there is no output, the directory is empty (this checks for regular files only):

$ find "/tmp" -type f -exec echo Found file {} \;

Found file /tmp/_.c
Found file /tmp/orbit-vivek/bonobo-activation-server-ior

2. The simplest and most effective way is to use ls with -A:

$ [ "$(ls -A /directory)" ] && echo "Not Empty" || echo "Empty" # Or

# init
# look for empty dir
if [ "$(ls -A "$DIR")" ]; then
    echo "Take action: $DIR is not empty"
else
    echo "$DIR is empty"
fi
# rest of the logic
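The same test wrapped as a small reusable function, demonstrated on a throwaway directory (the mktemp path is arbitrary):

```shell
# return 0 if the directory has no entries at all (dotfiles included, thanks to -A)
is_empty() { [ -z "$(ls -A "$1")" ]; }

D=$(mktemp -d)                     # throwaway demo directory
is_empty "$D" && echo "Empty"      # prints: Empty
touch "$D/file"
is_empty "$D" || echo "Not Empty"  # prints: Not Empty
rm -rf "$D"
```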

Detect Duplicate IP Address With arping command

Duplicate IP addresses are a practical issue, especially on large networks. arping sends an ARP REQUEST to neighbor hosts / computers.

$ sudo arping -I eth0 -c 3

ARPING from ra0
Unicast reply from [00:18:39:6A:C6:8B] 2.232ms
Unicast reply from [00:18:39:6A:C6:8B] 1.952ms
Sent 3 probes (1 broadcast(s))
Received 3 response(s)
  • -I eth0 : Specify network interface i.e. name of network device where to send ARP REQUEST packets. This option is required.
  • -c 3 : Stop after sending 3 ARP REQUEST packets

-D specifies duplicate address detection (DAD) mode. arping returns exit status 0 if DAD succeeded, i.e. no replies were received and the address appears to be free. Always use the following syntax for duplicate address detection with arping:
$ sudo arping -D -I <interface-name> -c 2 <IP-ADDRESS-TO-TEST>
$ echo $? # 0 = no duplicate, non-zero = another host answered

Understanding Bash fork() bomb ~ :(){ :|:& };:

This is horrible code for any Unix / Linux box. It is often used by sysadmins to test user process limits (Linux process limits can be configured via /etc/security/limits.conf and PAM).

Once a successful fork bomb has been activated in a system it may not be possible to resume normal operation without rebooting, as the only solution to a fork bomb is to destroy all instances of it.

Here is more human readable code:

bomb() {
bomb | bomb &
}; bomb

A properly configured Linux / UNIX box should not go down when a fork() bomb goes off; this can be prevented by limiting user processes, which is important for running a stable system. Just add a user name, group, or all users to the /etc/security/limits.conf file and impose process limits.

# This will prevent a "fork bomb":
vivek hard nproc 300
@student hard nproc 50
@faculty soft nproc 100
@pusers hard nproc 200

The above prevents anyone in the student group from having more than 50 processes; the faculty and pusers groups are limited to 100 and 200 respectively, and vivek can create only 300 processes. Please note that KDE and Gnome can launch many processes.
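The limits.conf values map onto the shell's ulimit mechanism; you can watch the knob work by lowering the soft cap inside a subshell (the 300 mirrors vivek's nproc line and leaves the current shell untouched):

```shell
# lower the max-user-processes soft limit to 300 inside a subshell only
(
    ulimit -S -u 300
    echo "processes capped at: $(ulimit -S -u)"
)
```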

Calculating Subnets with ipcalc and sipcalc Utilities

Supported features include:

=> Multiple address and netmask input formats.
=> Retrieving of address information from interfaces.
=> Classful and CIDR output.
=> Multiple address and netmask output formats (dotted quad, hex, number of bits).
=> Output of broadcast address, network class, Cisco wildcard, hosts/range, network range.
=> Output of multiple types of bitmaps.
=> Output of a user-defined number of extra networks.
=> Multiple networks input from commandline.
=> DNS resolutions of hostnames.
=> Compressed and expanded input addresses.
=> Compressed and expanded output.
=> Standard IPv6 network output.
=> v4 in v6 output.
=> Reverse dns address generation.
=> The ability to "split" a network based on a smaller netmask, also with recursive runs on the generated subnets.

[a] ipcalc takes an IP and netmask and calculates the resulting broadcast, network, Cisco wildcard mask, and host range. By giving a second netmask, you can design sub- and supernetworks. It is also intended to be a teaching tool and presents the results as easy to understand binary values.

[b] sipcalc is also an ip subnet calculator consisting of two parts: a plain text based console version, and a web (cgi) based counterpart. Sipcalc, in its simplest form, takes an IP and a subnet mask on the command line and outputs info about the subnet. It supports IPv4/v6 addresses.

$ ipcalc

Address:      11000000.10101000.00000000. 00000001
Netmask: = 24   11111111.11111111.11111111. 00000000
Wildcard:        00000000.00000000.00000000. 11111111
Network:    11000000.10101000.00000000. 00000000
HostMin:      11000000.10101000.00000000. 00000001
HostMax:    11000000.10101000.00000000. 11111110
Broadcast:  11000000.10101000.00000000. 11111111
Hosts/Net: 254                   Class C, Private Internet

$ sipcalc

-[ipv4 :] - 0

Host address            -
Host address (decimal)  - 3232235521
Host address (hex)      - C0A80001
Network address         -
Network mask            -
Network mask (bits)     - 24
Network mask (hex)      - FFFFFF00
Broadcast address       -
Cisco wildcard          -
Addresses in network    - 256
Network range           - -
Usable range            - -

$ sipcalc

-[ipv4 :] - 0

Host address            -
Host address (decimal)  - 3232235521
Host address (hex)      - C0A80001
Network address         -
Network mask            -
Network mask (bits)     - 17
Network mask (hex)      - FFFF8000
Broadcast address       -
Cisco wildcard          -
Addresses in network    - 32768
Network range           - -
Usable range            - -

-[ipv4 :] - 0

Host address            -
Host address (decimal)  - 4294950912
Host address (hex)      - FFFFC000
Network address         -
Network mask            -
Network mask (bits)     - 32
Network mask (hex)      - FFFFFFFF
Broadcast address       -
Cisco wildcard          -
Addresses in network    - 1
Network range           - -

#Display results as HTML

$ ipcalc -h

$ man ipcalc
$ man sipcalc
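When neither tool is installed, the core arithmetic is easy to reproduce with plain shell bit operations. A minimal sketch for an assumed example address (any shell with POSIX arithmetic should do):

```shell
ip=; prefix=24        # example inputs (arbitrary)

# split the dotted quad and pack it into a single integer
set -- $(echo "$ip" | tr '.' ' ')
addr=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))

mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
net=$(( addr & mask ))                   # network address
bcast=$(( net | (~mask & 0xFFFFFFFF) ))  # broadcast address

to_dotted() {
    echo "$(( $1 >> 24 & 255 )).$(( $1 >> 16 & 255 )).$(( $1 >> 8 & 255 )).$(( $1 & 255 ))"
}

echo "Network:   $(to_dotted $net)/$prefix"        # ->
echo "Broadcast: $(to_dotted $bcast)"              # ->
echo "Hosts/Net: $(( (~mask & 0xFFFFFFFF) - 1 ))"  # -> 254
```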

Find the memory used by a process using the pmap command

By looking into /proc or using ps or top, you must calculate total memory usage yourself, i.e. add up shared memory + mapped files + total virtual memory size of the process + resident set size + non-swapped physical memory used by the process.

pmap reports the memory map of a process or processes. To display process mappings:
$ pmap pid

-x can be used to provide information about the memory allocation and mapping types per mapping, including the amount of resident, non-shared anonymous, and locked memory.
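A quick way to try it is on the current shell itself ($$ expands to its PID); the final line of pmap -x totals the columns:

```shell
# show the mapping summary for the current shell; the last row is the total
pmap -x $$ | tail -n 2
```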

Testing Connectivity with fping

You can read a list of targets (hosts / servers) from a file. The -f option can only be used by the root user; regular users should pipe in the file via stdin. For example:
$ fping < ~/.ping.conf

You can also use a netmask to ping an entire network, i.e. generate a target list from a supplied IP/netmask. For example, to ping the class C 192.168.1.x:
$ fping -g # or
$ fping -g

# Send email if host is down

HOSTS="router sun printer laptop sony-laptop xbox backup-server"
for h in $HOSTS; do
    fping -u "$h" > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        echo "${h} host is down, send email"
        # mail -s "Host ${h} down"
    fi
done

# Perform an action only on hosts that are currently reachable.

my $myHosts = `cat /etc/hosts.backup | fping -a`;
foreach my $host (split(/\n/, $myHosts)) {
    # take action or call another function
}

Saturday, February 23, 2008

Disk encryption easily cracked, researchers find

The disk encryption technology used to secure the data in your Windows, Apple and Linux laptops can be easily circumvented, according to new research out of Princeton University.

The flaw in this approach, the researchers say, is that data previously thought to disappear immediately from dynamic RAM (DRAM) actually takes its time to dissolve, leaving the data on the computer vulnerable to thievery regardless of whether the laptop is on or off. That's because the disk encryption key, unlocked via a password when you log on to your computer, then is held in DRAM. If a thief can get a hold of the key, he can then get into the disk.

"We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with MacOS X; and dm-crypt, which is used with Linux," writes Ed Felten, a Princeton professor, on his blog, Freedom to Tinker.

The researchers, which also included participants from the Electronic Frontier Foundation and Wind River Systems, have created a captivating video demonstrating a process (one using a program dubbed "Bit-unLocker") that can be used to snatch the data. In the video, the narrator explains that it takes seconds for data to fade and that the process can be slowed by cooling the memory chips (they chill the memory chips to around -58 F with a liquid spray and remove them without affecting the contents). The chips can even be switched to a different computer to read them. Liquid nitrogen can be used to cool the chips for hours, the researchers say.

"This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM," Felten writes.

Felten adds that even using Trusted Computing hardware doesn't help.

(A presentation from a pair of security researchers scheduled for Black Hat USA last summer that promised to undermine chip-based desktop and laptop security was suddenly withdrawn without explanation. The briefing promised to show how computer security based on trusted platform module hardware could be circumvented.)

The Princeton findings prompted Steven Sprague, CEO of Wave Systems, which makes management software for hardware security devices, to point out that such attacks on laptops would be preventable via hardware-based encryption offerings.

"The advantage of hardware-based encryption is that all the encryption, key management and access control all happen inside the chip so there is no software risk to reverse engineer the encryption silicon," Sprague said. The encryption key never leaves the hardware-based encryption disk in this case, he said.

Members of the mailing list, which daily documents data breaches, buzzed about the findings, with some suggesting the research shows the need for multifactor authentication or partial keys stored in separate places.

U.S. states have enacted a series of tough data disclosure laws over the past five years, which force companies to notify residents whenever they lose sensitive information. Under these laws, a missing laptop can cost a company millions of dollars, as well as public embarrassment, as it is forced to track down and notify those whose data was lost.

However, many state laws, such as California's SB 1386, make an exception for encrypted PCs. So if a company or government agency loses an encrypted laptop containing sensitive data, they are not compelled to notify those affected.
The team's research may spur legislators to rethink that approach, according to Alex Halderman, a Princeton graduate student who worked on the paper. "Maybe that law is placing too much faith in disk encryption technologies," he said. "It may be that we're not hearing about thefts of encrypted machines where that data could still be at risk."

Robert McMillan, IDG News Service, contributed to this report.

Time to say Good-bye to Netscape?

Stony Stevenson writes to point out that Netscape has finally reached the end of the line with its final version release. A pop-up will offer users the choice of switching to Firefox, Flock, or remaining with the dead browser, but no new updates will be released. "Nearly 14 years after the once mighty browser made its first desktop appearance as Mosaic Netscape 0.9, its disappearance comes as little surprise. Although Netscape accounted for more than 80 per cent of the browser market in 1995, the arrival of Microsoft's Internet Explorer in the same year brought stiff competition and surpassed Netscape within three years."

A new era of voyage has come?

NASA has made a recent award of 171 million dollars to Orbital Sciences Corp. of Virginia in order to aid the company in developing a feasible space cargo delivery service. 'The US space agency intends to hold an open competition in the years ahead for actual space station cargo-delivery contracts, but Orbital of Dulles, VA, is one of two companies receiving financial help from NASA to develop their proposed systems. The other is Space Exploration Technologies of El Segundo, CA.'