
Quick and dirty configuration of NFS

NFS Consists of the following:

/etc/exports -> contains all the NFS share definitions

/usr/sbin/exportfs -r
#exportfs -r is used to synchronize nfsd in memory with the /etc/exports file
#Use exportfs -v to see which shares nfsd is currently exporting

/etc/rc.d/init.d/nfslock - which has 2 parts
/sbin/rpc.lockd
/sbin/rpc.statd

/etc/rc.d/init.d/nfs - which has 3 parts
/usr/sbin/rpc.rquotad
/usr/sbin/rpc.mountd
/usr/sbin/rpc.nfsd

At bare minimum you need to have portmap (or portmapper), mountd (or rpc.mountd), and nfsd (or rpc.nfsd) running; otherwise NFS isn’t running.

#Sample nfs /etc/exports file:
/home/ftp/pub (ro,insecure,all_squash)
/home/ftp/pub adminsvr(rw,insecure,all_squash)

#Above we have two entries: one for everyone, and one
#for the adminsvr machine.
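
To apply and verify the exports, and to test them from a client, you can do something like the following sketch (nfsserver is a placeholder for your NFS server's hostname or IP):

# exportfs -r                                  # on the server: re-read /etc/exports
# exportfs -v                                  # on the server: list what is currently exported
# mount -t nfs nfsserver:/home/ftp/pub /mnt    # on a client: mount the exported share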

What is Segmentation fault

One of the most common problems when developing software is an error like “Segmentation fault”, also called a SegFault. Here is what a SegFault is.

Virtual memory in a computer can be created in 2 ways: pages or segments. Paging means that the memory is divided in pages of equal size, containing words of memory in it. Segmentation means that every process has a segment of memory of needed size, with gaps of empty memory blocks between the segments.

The operating system knows the upper limit of every segment, and every segment begins at a virtual address. When a program accesses a memory block, it uses a virtual address that the Memory Management Unit (MMU) maps to a real address. If the operating system sees that the requested address doesn’t fall within any valid segment, it sends a signal (SIGSEGV) to the process, terminating it. SegFaults are the direct result of a memory error: the program has a bad pointer, a memory leak or some other error that makes it access the wrong memory address.

Allow normal users to mount drives

By default, Linux does not allow ordinary users to mount drives; only root can do this.

With a special option in the /etc/fstab file, you can change that. This is a typical line for the /dev/hda2 drive in /etc/fstab:

/dev/hda2 /mnt ext3 noauto,user 1 1

The user option allows any user to mount the drive at /mnt.
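
Once that line is in place, an ordinary user can mount the drive without becoming root; a quick sketch:

$ mount /mnt       # mounts /dev/hda2 at /mnt as a normal user
$ umount /mnt      # the same user can unmount it again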

USB Modem Configuration

Follow the steps mentioned below to configure an internet connection using a USB modem.

1) Insert your USB modem and watch the /var/log/messages log file; you should see something like:

Dec 11 16:44:56 poison kernel: ohci_hcd 0000:00:13.0: wakeup
Dec 11 16:44:56 poison kernel: usb 1-3: new full speed USB device using ohci_hcd and address 2
Dec 11 16:44:56 poison kernel: usb 1-3: new device found, idVendor=1004, idProduct=6000
Dec 11 16:44:56 poison kernel: usb 1-3: new device strings: Mfr=1, Product=2, SerialNumber=3
Dec 11 16:44:56 poison kernel: usb 1-3: Product: Qualcomm CDMA Technologies MSM
Dec 11 16:44:56 poison kernel: usb 1-3: Manufacturer: LG CDMA USB Modem
Dec 11 16:44:56 poison kernel: usb 1-3: SerialNumber: Serial Number
Dec 11 16:44:56 poison kernel: usb 1-3: configuration #1 chosen from 1 choice
Dec 11 16:44:57 poison kernel: cdc_acm 1-3:1.0: ttyACM0: USB ACM device
Dec 11 16:44:57 poison kernel: usbcore: registered new driver cdc_acm
Dec 11 16:44:57 poison kernel: drivers/usb/class/cdc-acm.c: v0.25:USB Abstract Control Model driver for USB modems and ISDN adapters

Here we are interested in the port the modem is connected to, which is ttyACM0.

2) Copy and paste the following lines into your /etc/wvdial.conf

[Dialer Defaults]
Modem = /dev/ttyACM0
Baud = 57600
Init1 = AT+CRM=1
Phone = 777
Username = internet
Password = internet
Stupid Mode = 1
Compuserve = 0
Idle Seconds = 30

3) Run the following command to update any missing parameter in /etc/wvdial.conf file

poison:/etc # wvdialconf /etc/wvdial.conf

4) Run the following command to start the internet connection

poison:/etc # wvdial

I have tested this on Fedora and SUSE Linux, and I believe it should also work for you.

HowTo - Round Robin DNS

In this case we will take the example of web servers.

To use round robin, each web server must have its own public IP address. A common scenario is to use network address translation and port forwarding at the firewall to assign each web server a public IP address while internally using a private address.

This example from the DNS zone definition for foo.com assigns the same name to each of the three web servers, but uses different IP addresses for each:

;
; Domain database for foo.com
;
foo.com. IN SOA ns1.foo.com. hostmaster.foo.com. (
2006032801 ; serial
10800 ; refresh
3600 ; retry
86400 ; expire
86400 ; default_ttl
)
;
; Name servers
;
foo.com. IN NS ns1.foo.com.
foo.com. IN NS ns2.foo.com.
;
; Web servers
; (private IPs shown, but public IPs are required)
;
www IN A 10.1.1.11
www IN A 10.1.1.12
www IN A 10.1.1.13

When DNS gets a request to resolve the name www.foo.com, it will return one IP address, then a different address for the next request and so on. Theoretically, each web server will get one third of the web traffic. Due to DNS caching and because some requests may use more resources than others, the load will not be shared equally. However, over time it will come close.
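
You can watch the rotation yourself by querying the zone's name server a few times. A quick sketch using dig (assuming the records above are served by ns1.foo.com):

$ dig +short www.foo.com @ns1.foo.com
10.1.1.11
10.1.1.12
10.1.1.13

All three A records come back on every query; with typical round-robin (cyclic) ordering on the name server, the order changes from one query to the next, which is what spreads new connections across the servers.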

This is a good example of load balancing using a DNS server (round robin).

UNIX history in form of picture


HowTo Identify your Modem chipset

# Download http://easylinux.info/uploads/scanModem.gz

# gunzip -c scanModem.gz > scanModem
# chmod +x scanModem
# cp scanModem /usr/bin/

To identify the modem chipset:

# scanModem
# gedit Modem/ModemData.txt




Quick Log server configuration

On the log server: edit /etc/sysconfig/syslog and set
SYSLOGD_PARAMS="-r"

The -r option allows remote machines to log to this syslog server.
On the client machine, edit /etc/syslog.conf and add the line(s):

*.* @server

Don’t forget to restart the syslog on both machines after the above editing.
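
On most distributions the stock init script can do the restart; a minimal sketch (the service name may be syslog, sysklogd or rsyslog depending on the distribution):

# /etc/init.d/syslog restart    # run this on both the server and the client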


They are Computer GODS

1) Dennis Ritchie and Ken Thompson - Dennis was the original developer of C and one of the core developers of UNIX, while Ken was the man responsible for UNIX.

2) Bjarne Stroustrup - This guy brought us C++! I couldn't talk trash about a guy who made C++.

3) Alan Kay - One of the fathers of object-oriented programming.

4) John McCarthy - The original designer of the Lisp programming language.

5) Richard Stallman - Like him or hate him, he is a seriously influential person in the computer world. The man is the founder of GNU.

6) Larry Wall - This guy brought us Perl.

7) Alan Cox - Alan Cox and Richard Stallman must have been long-lost brothers. This guy was one of the earliest developers on the Linux kernel and apparently has not shaved since he started.

8) James Gosling - This guy brought us Java and is wearing a shirt with the Java mascot playing an electric guitar.

9) Guido van Rossum - This guy brought us Python.

10) Linus Benedict Torvalds - Best known for initiating the development of the Linux kernel.

11) Dr. Andrew Stuart "Andy" Tanenbaum - He is best known as the author of MINIX, a free Unix-like operating system for teaching purposes, and for his computer science textbooks, regarded as standard texts in the field. He regards his teaching job as his most important work.

Am I missing anyone here?

All 235 low-cost webcams supported in Linux thanks to... this man


A lone hobbyist programmer working from his home in France is responsible for adding 235 USB webcams to the list of those supported by Linux.

His name is Michel Xhaard.

Check his site: here
List of supported webcams: here





HowTo Convert CHM files to PDF in Linux

1. Download chm2pdf by clicking [Here].

2. Extract the downloaded file.

3. Open your Terminal and browse to the folder you extracted.

4. Run: sudo python setup.py install

Using it is very simple:

In the Terminal, type: chm2pdf filename.chm

Dependencies

You need these packages installed and working:

* chmlib
* pychm
* htmldoc
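
On a Debian/Ubuntu style system the dependencies can usually be installed straight from the repositories; a sketch (the package names are a guess and may differ on your distribution):

$ sudo apt-get install libchm1 python-chm htmldoc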


HowTo do Transparent proxy with Squid

Modify or add the following to the Squid configuration file (/etc/squid/squid.conf):

httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
acl lan src 192.168.1.1 192.168.2.0/24
http_access allow localhost
http_access allow lan


Add the following rules to forward all HTTP requests (coming to port 80) to the Squid server on port 3128:

[eth0 connected to internet and eth1 connected to local lan]

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128


How To Apply Patch to Kernel Source

Go to the directory which holds the kernel source code:

# cd /usr/src/linux
# bzip2 -dc /usr/src/patch-x.y.z.bz2 | patch -p1 --dry-run

The --dry-run option checks that the patch applies cleanly; it can be a real pain to pull out a partially-applied patch. The -p1 option strips off part of the diff file's pathnames for each changed file (see the patch(1) manpage for more details).

Now that you've checked that it should apply cleanly, run the following to actually apply it. Then you're done!

# bzip2 -dc /usr/src/patch-x.y.z.bz2 | patch -p1

Make fetchmail deliver via procmail

Just add the following line:

mda "/usr/bin/procmail -d %T"

to your "options" in ~/.fetchmailrc. For instance, .fetchmailrc may look something like:

poll mail.domain.com protocol pop3 username myusername password mysecretpassword options ssl mda "/usr/bin/procmail -d %T"

Filter attachments (.bat, .exe, etc..) in postfix

Postfix attempts to be fast, easy to administer, and secure. The outside has a definite Sendmail-ish flavor, but the inside is completely different. Postfix has several hundred configuration parameters that are controlled via the main.cf file. Fortunately, all parameters have sensible default values.

It's simple. In /etc/postfix/main.cf, enable the body checks with this line:

body_checks = pcre:/etc/postfix/body_checks

Now put something like this in the file /etc/postfix/body_checks:

/^(.*)name=\"(.*)\.(hta|vb[esx]|ws[fh]|js[e]|bat|cmd)\"$/ REJECT

This will filter attachments of various types. Remove from the above line whatever you want to allow. Of course, you can add some more lines to make Postfix a simple spam filter:

/special offer email/ REJECT
/mortgage rates/ REJECT
/other spam mail subjects/ REJECT
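
After editing main.cf and the body_checks file, tell Postfix to pick up the changes; a minimal sketch:

# postconf body_checks      # confirm the parameter is set as expected
# postfix reload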



Add Microsoft True Type Fonts into Linux

If you are a regular user of Windows and have recently switched to Linux, you are surely going to miss some things from your Windows machine. One such thing is the beautiful fonts you used on Windows.

Method 1:
There are various methods for installing these fonts; one of them is described below.

* Install a Cab Extract Utility for Linux. Get it from http://www.cabextract.org.uk/
* Download the Latest msttcorefonts spec from http://corefonts.sourceforge.net/
* If you have an RPM build environment in your home directory, then build the package with the command: rpmbuild -bb msttcorefonts-2.0-1.spec
* Install the newly built RPM using the following command: rpm -ivh <>
* Once done, log off and log in again, or restart the xfs service using the command: /sbin/service xfs reload

Method 2:
Use the prebuilt RPM:

* rpm -ivh msttcorefonts-2.0-1.noarch.rpm

You should be logged in as a super user or use sudo



HowTo Add PATH

To add or remove a directory in your path, use a text editor to change the shell variable `PATH' in the `.bashrc' file in your home directory.

For example, suppose the line that defines the `PATH’ variable in your `.bashrc’ file looks like this:

PATH="/usr/bin:/bin:/usr/bin/X11:/usr/games"

You can add the directory `/home/nikesh/bin’ to this path, by editing this line like so:

PATH="/usr/bin:/bin:/usr/bin/X11:/usr/games:/home/nikesh/bin"

HowTo change the file modification time

Use touch to change a file’s timestamp without modifying its contents. Give the name of the file to be changed as an argument. The default action is to change the timestamp to the current time.

* To change the timestamp of file `nikesh’ to the current date and time, type:

$ touch nikesh

To specify a timestamp other than the current system time, use the `-d’ option, followed by the date and time that should be used enclosed in quote characters. You can specify just the date, just the time, or both.

* To change the timestamp of file `nikesh' to `17 May 2006 14:16', type:

$ touch -d '17 May 2006 14:16' nikesh

* To change the timestamp of file `nikesh' to `14 May', type:

$ touch -d '14 May' nikesh

* To change the timestamp of file `nikesh' to `14:16', type:

$ touch -d '14:16' nikesh

What are TCP control bits

There are six ‘control bits’ defined in TCP, one or more of which are set in each packet. The control bits are ‘SYN’, ‘ACK’, ‘PSH’, ‘URG’, ‘RST’, and ‘FIN’. TCP uses these bits to define the purpose and contents of a packet.

SYN bit is used in establishing a TCP connection to synchronize the sequence numbers between both endpoints.

ACK bit is used to acknowledge the remote host’s sequence numbers, declaring that the information in the acknowledgment field is valid.

PSH flag is set on the sending side, and tells the TCP stack to flush all buffers and send any outstanding data up to and including the data that had the PSH flag set. When the receiving TCP sees the PSH flag, it too must flush its buffers and pass the information up to the application.

URG bit indicates that the urgent pointer field has a valid pointer to data that should be treated urgently and be transmitted before non-urgent data.

RST bit tells the receiving TCP stack to immediately abort the connection.

FIN bit is used to indicate that the client will send no more data (but will continue to listen for data).
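
You can watch these bits on the wire with tcpdump; a sketch (assuming the interface is eth0) that captures packets with SYN, FIN or RST set:

# tcpdump -n -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-fin|tcp-rst) != 0'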

Temporarily suspend a process

At times, you may find it necessary to temporarily suspend a process, and then resume its execution at a later time. The following two commands will suspend a process, and then resume it, respectively:

# kill -STOP 945
# kill -CONT 945
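
If you don't know the PID, you can look it up by name first; a sketch using a hypothetical process name myjob:

# kill -STOP $(pgrep myjob)    # suspend it
# kill -CONT $(pgrep myjob)    # resume it later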

How to add a module to an existing kernel

Sometimes you need to add a new driver to a modular kernel. You can just compile the needed module and install it without recompiling the entire kernel. Just follow these steps:

# cd /usr/src/linux (location where linux kernel source is stored)
# make config Or make menuconfig OR make xconfig
(Choose the driver as a module)

# make dep
# make modules
# make modules_install
# depmod -a

You should now be able to use the new module.
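
Loading and checking the freshly built module is then a one-liner each; a sketch using e100 as a hypothetical module name:

# modprobe e100        # load the new module
# lsmod | grep e100    # confirm it is loaded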

How to do Virtual hosting in Vsftpd

There are two ways of doing this ...

1. If you integrate vsftpd with xinetd, you can use xinetd to bind to several different IP addresses. For each IP address, get xinetd to launch vsftpd with a different config file. This way, you can get different behavior per virtual address.

2. Alternatively, run as many copies of vsftpd as necessary, in standalone mode. Use "listen_address=x.x.x.x" to set the virtual IP.
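
A minimal sketch of the standalone approach, with hypothetical config file names and virtual IPs:

# /etc/vsftpd-site1.conf
listen=YES
listen_address=192.168.1.10

# /etc/vsftpd-site2.conf
listen=YES
listen_address=192.168.1.11

Then start one copy of vsftpd per config file:

# vsftpd /etc/vsftpd-site1.conf &
# vsftpd /etc/vsftpd-site2.conf &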

Prevent the reuse of old passwords

The PAM module pam_unix.so can be configured to maintain a list of old passwords for every user prohibiting the reuse of old passwords.

The list is located in the /etc/security/opasswd file. This is not a plain text file, but it should be protected the same as the /etc/shadow file. This is normally referred to as password history.

To remember the last 5 passwords, add the line below to the /etc/pam.d/system-auth file:

password sufficient /lib/security/pam_unix.so use_authtok md5 shadow remember=5


Install multimedia Support in Fedora 8

Follow these instructions to get MP3 and other multimedia support on your Fedora 8 system.

Open a terminal and become root, then run these commands:

# wget http://livna-dl.reloumirrors.net/fedora/8/i386/livna-release-8-1.noarch.rpm
# rpm -ivh livna-release-8-1.noarch.rpm

Install all other plug ins..

# yum -y install gstreamer-plugins-bad gstreamer-plugins-ugly xine-lib-extras-nonfree

How to Encrypt - Decrypt your files

To Encrypt a normal file:
$ openssl des3 -salt -in myfile.txt -out outputfile.des3
[This will ask for password, give any pass that you can remember]

To Decrypt:
$ openssl des3 -d -salt -in outputfile.des3 -out myfile.txt -k password

Recover MySQL database root password

You can recover the MySQL database server root password with the following easy steps.

Step # 1: Stop the MySQL server process.

Step # 2: Start the MySQL (mysqld) server/daemon process with the --skip-grant-tables option so that it will not prompt for a password

Step # 3: Connect to mysql server as the root user

Step # 4: Setup new root password

Step # 5: Exit and restart MySQL server

Here are commands you need to type for each step (login as the root user):

Step # 1 : Stop mysql service

# /etc/init.d/mysql stop

Stopping MySQL database server: mysqld.

Step # 2: Start the MySQL server w/o password

# mysqld_safe --skip-grant-tables &

[1] 5988

Starting mysqld daemon with databases from /var/lib/mysql

mysqld_safe[6025]: started

Step # 3: Connect to mysql server using mysql client

# mysql -u root

Output:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.15-Debian_1-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

Step # 4: Setup new MySQL root user password

mysql> use mysql;

mysql> update user set password=PASSWORD('NEW-ROOT-PASSWORD') where User='root';

mysql> flush privileges;

mysql> quit

Step # 5: Stop MySQL Server:

# /etc/init.d/mysql stop

Stopping MySQL database server: mysqld STOPPING server from pid file /var/run/mysqld/mysqld.pid
mysqld_safe[6186]: ended
.
[1]+ Done mysqld_safe --skip-grant-tables

Step # 6: Start MySQL server and test it

# /etc/init.d/mysql start

# mysql -u root -p

BackUp and Restore MBR after Windows messes it up

Just another note about restoring the boot loader for dual boot systems, after Windows messes it up. In Linux, the “dd” command can read and write to/from raw disks and files. If you have a floppy drive, creating a boot disk is as simple as putting a floppy in the drive and typing this:

[You need to use "root" account to do following]

# dd if=/dev/hda of=/dev/fd0 bs=512 count=1

This makes an exact copy of the MBR of the first hard drive (hda - you need to replace this), copying it to a floppy disk. You can boot directly from this floppy, and see your old boot menu. You can restore it by switching the “if=” and “of=” (input file, output file) parameters.
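
In other words, restoring from the floppy is the same command with the parameters swapped (double-check that you are writing to the right disk):

# dd if=/dev/fd0 of=/dev/hda bs=512 count=1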

If you don’t have a floppy drive, you can back it up to a file with this:

# dd if=/dev/hda of=/home/nik/boot.mbr bs=512 count=1

Then you can boot into a CD-ROM distribution such as Knoppix, or often use your Linux distribution’s installation CD to boot into rescue mode and restore it with:

# dd if=/mnt/hda5/nik/boot.mbr of=/dev/hda bs=512 count=1

(you’ll need to find and mount the partition containing the directory where you backed up the MBR).

How to use rsync to keep files in sync between servers

Suppose the remote computer is "192.168.1.171" and has the account "don". You want to keep in sync the files under /home/Logs on the remote computer with the files under /home/nikesh/Server on the local computer.

$ rsync -Lae ssh don@192.168.1.171:/home/Logs /home/nikesh/Server

"rsync" is a convenient command for keeping files in sync, and as shown above, it will work through ssh. The -L option tells rsync to treat symbolic links like ordinary files.
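
To keep the copies in sync automatically, the same command can be run from cron; a sketch of a crontab entry that syncs every hour (paths and schedule are just examples):

0 * * * * rsync -Lae ssh don@192.168.1.171:/home/Logs /home/nikesh/Server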

Also see [http://www.rsnapshot.org/]

Sync Samba and Unix password

The pam_smbpass PAM module can be used to sync users’ Samba passwords with their system passwords. If a user invokes the passwd command, the password he uses to log in to the system as well as the password he must provide to connect to a Samba share are changed.

To enable this feature, add the following line to /etc/pam.d/system-auth below the pam_cracklib.so invocation:

password required /lib/security/pam_smbpass.so nullok use_authtok try_first_pass


Quick Apache configuration tips

Enable Directory Browsing
Options +Indexes

## block a few types of files from showing
IndexIgnore *.wmv *.mp4 *.avi

Disable Directory Browsing
Options All -Indexes

Customize Error Messages
ErrorDocument 403 /forbidden.html
ErrorDocument 404 /notfound.html
ErrorDocument 500 /servererror.html

Get SSI working with HTML/SHTML
AddType text/html .html
AddType text/html .shtml
AddHandler server-parsed .html
AddHandler server-parsed .shtml
# AddHandler server-parsed .htm

Change Default Page (order is followed!)
DirectoryIndex myhome.htm index.htm index.php

Block Users from accessing the site
order allow,deny
deny from 202.54.122.33
deny from 8.70.44.53
deny from .spammers.com
allow from all

Allow only LAN users
order deny,allow
deny from all
allow from 192.168.0.0/24

Redirect Visitors to New Page/Directory
Redirect /oldpage.html http://www.domainname.com/newpage.html
Redirect /olddir http://www.domainname.com/newdir/

Block site from specific referrers
RewriteEngine on
RewriteCond %{HTTP_REFERER} site-to-block\.com [NC,OR]
RewriteCond %{HTTP_REFERER} site-to-block-2\.com [NC]
RewriteRule .* - [F]

Block Hot Linking/Bandwidth hogging
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?mydomain.com/.*$ [NC]
RewriteRule \.(gif|jpg)$ - [F]

Want to show a “Stealing is Bad” message too?
Add this below the Hot Link Blocking code:
RewriteRule \.(gif|jpg)$ http://www.mydomain.com/dontsteal.gif [R,L]

Stop .htaccess (or any other file) from being viewed
order allow,deny
deny from all
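
In a real .htaccess you normally wrap those two directives in a <Files> container, so that only the .ht* files are blocked rather than the whole directory; a sketch:

<Files ~ "^\.ht">
order allow,deny
deny from all
</Files>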

Avoid the 500 Error
# Avoid 500 error by passing charset
AddDefaultCharset utf-8

Grant CGI Access in a directory
Options +ExecCGI
AddHandler cgi-script .cgi .pl
# To enable all scripts in a directory use the following
SetHandler cgi-script

Change Script Extensions
AddType application/x-httpd-php .gne
Files ending in .gne will now be treated as PHP files! Similarly, x-httpd-cgi for CGI files, etc.

Use MD5 Digests
Performance may take a hit but if thats not a problem, this is a nice option to turn on.
ContentDigest On

Save Bandwidth
# Only if you use PHP
php_value zlib.output_compression 16386

Turn off magic_quotes_gpc
# Only if you use PHP
php_flag magic_quotes_gpc off



How To Make ISO image from CD

Use following command to create ISO image of CD
Note: /dev/hdc is my cdrom device

1. dd if=/dev/hdc of=/home/nikesh/example.iso bs=2048 conv=notrunc
-- OR --

2. cat /dev/hdc > /home/nikesh/example.iso
Both commands do exactly the same thing, but the first one might be easier to remember.

How to recover damaged Superblock

If a filesystem check fails and returns the error message “Damaged Superblock”

Solution:

There are backups of the superblock located at several positions, and we can restore from them with a simple command. Backup locations are: 8193, 32768, 98304, 163840, 229376 and 294912. (8193 applies in many cases only on older systems; 32768 is the most common position for the first backup.)

Now, suppose you get a "Damaged Superblock" error message at filesystem check (after a power failure) and you get a root prompt in a recovery console; then you give the command:

# e2fsck -b 32768 /dev/hda5

The system will then check the filesystem with the information stored in that backup superblock, and if the check is successful it will restore the backup to position 0.

If this does not work, try using one of the other copies of the superblock located at the positions mentioned above on your hard disk.
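
If you are not sure where the backup superblocks sit on your particular filesystem (the block size affects the locations), you can list them; a sketch assuming the partition is /dev/hda5:

# dumpe2fs /dev/hda5 | grep -i superblock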


How to crash Linux?

As root, you can do whatever you want.
Try this command, as root (reconsider if you really want to crash):
# cp /dev/zero /dev/mem
As root, you can even erase all the files on your system with a similarly innocuously looking one-liner (don’t do it):
# rm -rf /

This is not to say that Linux is easy to crash, but that the system administrator ("root") has complete power over the system, so think before you act when working on Linux as the "root" user.

Create Linux Filesystem From An Ordinary File

Under Linux, you can create a regular file, format it as an ext2, ext3, or reiser file system, and then mount it just like a physical drive. It's then possible to read and write files to this newly-mounted device. You can also copy the complete file system, since it is just a file, to another computer.

First, you want to create a 20MB file or any size you want by executing the following command:

     $ dd if=/dev/zero of=disk-image count=40960

     40960+0 records in
     40960+0 records out

Next, to format this as an ext3 filesystem, you just execute the following command:

     $ /sbin/mkfs -t ext3 -q disk-image
     mke2fs 1.32 (09-Nov-2002)
     disk-image is not a block special device.
     Proceed anyway? (y,n) y

You are asked whether to proceed because this is a file, and not a block device. That is OK.

Next, you need to create a directory that will serve as a mount point for the loopback device.

      $ mkdir fs

You must do the next command as root, or with an account that has superuser privileges.

      # mount -o loop=/dev/loop0 disk-image fs

You can now create new files, write to them, read them, and do everything you normally would do on a disk drive. To let a normal user use this filesystem, you need to set appropriate permissions on the directory holding it.
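
When you are finished, unmount it; and because the whole filesystem lives in one ordinary file, copying it to another machine is a single command. A sketch (the destination host and path are placeholders):

# umount fs
$ scp disk-image user@otherhost:/tmp/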



Allow normal user to mount cdrom

By default, this device is not accessible by all of your users. To allow users to mount the CDROM drive, login as root and execute the following command

# chmod a+r /dev/cdrom

This will allow any user on your Linux system to mount the CDROM drive from their console or desktop.

Mounting an ISO Image as a Filesystem

This is great, if you don’t have the DVD hardware, but need to get the data.
The following shows an example of mounting the Fedora Core 7 DVD ISO image.

# mkdir /iso
# mount -o loop -t iso9660 /FC7-i386-DVD.iso /iso


Or to mount automatically at boot, add the following to “/etc/fstab”

/FC7-i386-DVD.iso /iso iso9660 ro,loop 0 0

How to Use MD5

Using an MD5 checksum you can do exactly that: verify the integrity of data. This can be used in a number of different situations and in any number of different ways, but it is a simple and effective way to verify large amounts of data.

Message-Digest algorithm 5 is a cryptographic hash function with a 128-bit value which can be found all over, especially on the internet. A checksum is a kind of redundancy check which can verify the integrity of data in a number of ways. The most basic form of checksum will verify the size of a set amount of data, assuming that if the data has the correct number of bytes it was transferred without a problem. Using MD5, a unique string of letters and numbers can be put together to signify the data in question. Here is a sample string:
ecd4cb123cd3099f9c3e56f948b65375
The goal of this would be to identify data which needs to be backed up, and then create an MD5 checksum. With this done, the data can be copied into place and the MD5 checksum can be reviewed to verify the data was copied without incident.

How to use MD5 in Linux?
With any Linux distribution checking an MD5 checksum is easy and quick. No installations or add-ons should be necessary.

Generate a MD5 checksum:
open the console and type the following command to generate the MD5 checksum:
md5sum xxxxxx.iso > xxxxxx.iso.md5
(note: any file extension can be used)

Verify a MD5 checksum:
open the console and type the following command to check the MD5 checksum:
md5sum -c xxxxxx.iso.md5 
(this is supposing the MD5 and the file being verified are in the same directory)

That is really all there is to it! In just a few minutes you can be an MD5 expert and will have taken control of this powerful tool. MD5 checksums are very useful for the verification of data and for passwords, but it should be noted that tools are available that can crack MD5 hashes, so they are not always a perfectly secure way to store a password. Even so, they are a very useful tool for data redundancy, protection, and recovery.




Repair a Corrupt MBR and boot into Linux (fedora)

There are times when you inadvertently overwrite your Master Boot Record. The end result being that you are unable to boot into Linux. This is especially true when you are dual booting between windows and Linux OSes. Once when I was working in Windows XP, I accidentally clicked the hibernate button instead of shutdown. And windows somehow overwrote my MBR which housed the GRUB boot loader. At such times, it pays to have this cool tip at hand.

This is what you do to restore the GRUB boot loader when faced with the above problem. First you need a Linux distribution CD. If you are using Fedora (RedHat) then the first CD is sufficient. But you may also use any of the live CDs like Knoppix, Ubuntu Live CD and so on.

With Fedora CD
Boot your computer with the first CD of Fedora in your CD drive (You have to enable your PC to boot from the cdrom, which you can set in the BIOS settings). At the installation boot prompt that you get, enter the following command:

boot: linux rescue

and press Enter. The installer will ask you a few questions like the language you would like to use, the type of keyboard etc. Then, if you have linux previously installed on your machine, the Fedora installer will automatically detect it and mount it in the /mnt/sysimage directory. Once the linux partition is mounted, you are dropped into the command shell prompt. The next step is to make your newly mounted directory the root (or parent) directory. This you do by running the chroot command as follows:

# chroot /mnt/sysimage

Now you are in the shell with respect to the parent directory which is the linux partition on your harddisk.
From here, the steps needed depend on which boot loader you are using. You should have a fair idea of the device node of the hard disk housing your MBR. In most cases, it is /dev/hda if you have an IDE hard disk. But if you have a SCSI hard disk, it will be /dev/sda.

Restoring GRUB
Execute the following command :
# grub-install /dev/hda

to install GRUB boot loader on to your MBR. And then type exit to reboot the machine. Now your GRUB boot loader is fixed.



Creating the smbpasswd file from /etc/passwd file

Ok, to create the /etc/smbpasswd file: run the following command:

# cat /etc/passwd | mksmbpasswd.sh >/etc/smbpasswd
- Next, fix the permissions of the file:

# chmod 500 /etc/smbpasswd
With this command, all users defined in the /etc/passwd file will have an SMB entry put into the /etc/smbpasswd file. Please note that, if desired, users can log in with a different SMB username/password than their Unix username/password. Please be aware that although the user is now defined in the smbpasswd file, the user will be LOCKED out until they actually CHANGE their SMB password. To do this, run the following command PER user:

# smbpasswd nikesh





HowTo Create a self-signed SSL Certificate for Apache

Step 1: Generate a Private Key
The openssl toolkit is used to generate an RSA Private Key and CSR (Certificate Signing Request). It can also be used to generate self-signed certificates which can be used for testing purposes or internal usage.

The first step is to create your RSA Private Key. This key is a 1024 bit RSA key which is encrypted using Triple-DES and stored in a PEM format so that it is readable as ASCII text.

# openssl genrsa -des3 -out server.key 1024

Generating RSA private key, 1024 bit long modulus
…………………………………………………++++++
……..++++++
e is 65537 (0x10001)
Enter PEM pass phrase:
Verifying password - Enter PEM pass phrase:

Step 2: Generate a CSR (Certificate Signing Request)

Once the private key is generated a Certificate Signing Request can be generated. The CSR is then used in one of two ways. Ideally, the CSR will be sent to a Certificate Authority, such as Thawte or Verisign who will verify the identity of the requestor and issue a signed certificate. The second option is to self-sign the CSR, which will be demonstrated in the next section.

# openssl req -new -key server.key -out server.csr

Country Name (2 letter code) [IN]:IN
State or Province Name (full name) [Nikesh Jauhari]:Nikesh
Locality Name (eg, city) [Pune]:Pune
Organization Name (eg, company) [My Company Ltd]:Cybage Software Pvt. Ltd.
Organizational Unit Name (eg, section) []:Information Technology
Common Name (eg, your name or your server’s hostname) []:poison.hell.com
Email Address []:njauhari@cybage.com
Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Step 3: Remove Passphrase from Key

One unfortunate side-effect of the pass-phrased private key is that Apache will ask for the pass-phrase each time the web server is started. Obviously this is not necessarily convenient as someone will not always be around to type in the pass-phrase, such as after a reboot or crash. mod_ssl includes the ability to use an external program in place of the built-in pass-phrase dialog, however, this is not necessarily the most secure option either. It is possible to remove the Triple-DES encryption from the key, thereby no longer needing to type in a pass-phrase. If the private key is no longer encrypted, it is critical that this file only be readable by the root user! If your system is ever compromised and a third party obtains your unencrypted private key, the corresponding certificate will need to be revoked. With that being said, use the following command to remove the pass-phrase from the key:

# cp server.key server.key.org
# openssl rsa -in server.key.org -out server.key


The newly created server.key file has no more passphrase in it.

-rw-r--r-- 1 root root 745 Jun 29 12:19 server.csr
-rw-r--r-- 1 root root 891 Jun 29 13:22 server.key
-rw-r--r-- 1 root root 963 Jun 29 13:22 server.key.org

Step 4: Generating a Self-Signed Certificate

To generate a temporary certificate which is good for 365 days, issue the following command:

# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
………………………..
Getting Private key
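
Before installing it, you can inspect the certificate you just created; a quick sketch:

# openssl x509 -in server.crt -noout -subject -dates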

Step 5: Installing the Private Key and Certificate

When Apache with mod_ssl is installed, it creates several directories in the Apache config directory. The location of this directory will differ depending on how Apache was compiled.

# cp server.crt /usr/local/apache/conf/ssl.crt
# cp server.key /usr/local/apache/conf/ssl.key


Step 6: Configuring SSL Enabled Virtual Hosts

SSLEngine on
SSLCertificateFile /usr/local/apache/conf/ssl.crt/server.crt
SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/server.key
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
CustomLog logs/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
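
Those directives normally live inside an SSL-enabled virtual host container in httpd.conf (or ssl.conf); a minimal sketch, assuming the _default_:443 address commonly used by mod_ssl and the file locations from Step 5:

<VirtualHost _default_:443>
ServerName poison.hell.com
SSLEngine on
SSLCertificateFile /usr/local/apache/conf/ssl.crt/server.crt
SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/server.key
</VirtualHost>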

Step 7: Restart Apache and Test

/etc/init.d/httpd stop
/etc/init.d/httpd start


Now you can use https://yourwebservername.domain-name.



Virtual Hosting using Apache

If you want to maintain multiple domains/hostnames on your machine you can setup VirtualHost containers for them.
Please see the documentation at http://www.apache.org/docs/vhosts/ for further details before you try to set up virtual hosts.

Sample


                    FQDN (IP address)               Document Root
Host Name           www.hell.com (192.168.0.1)      /var/www/html/
Virtual Host Name   server.hell.com (192.168.0.1)   /var/www/server/


BIND Setup

  • Add an Alias Name into BIND DB file
    www             IN      A       192.168.0.1
    server IN CNAME www

    Apache Setup

  • httpd.conf
    ...
    NameVirtualHost 192.168.0.1:80

    <VirtualHost 192.168.0.1:80>
        DocumentRoot /var/www/html/
        ServerName _default_
    </VirtualHost>

    <VirtualHost 192.168.0.1:80>
        DocumentRoot /var/www/server/
        ServerName server.hell.com
    </VirtualHost>
    ...

    Scan vulnerability by using Nessus

    Nessus is an incredible commercial-grade vulnerability scanner that is also freely available under the GNU Public License (GPL). Nessus can use Nmap to further probe networks for holes. Nessus can selectively scan for over 675 (and growing) known security problems. The resulting reports are organized by host, categorized by severity, and can be exported in a variety of formats, including a very slick cross-linked HTML report with pie charts. Links to fixes for known security problems are included.
    Installation

    Get the required files from : http://rpm.pbone.net/
    # rpm -ihv nessus-core-2.2.3-3.i586.rpm
    # rpm -ihv nessus-libraries-2.2.3-3.i586.rpm

    Initial Configuration
    Create a certificate
    # nessus-mkcert

    If you do not know how to answer, just press enter.
    Create a user
    # nessus-adduser

    Change the runlevel, and start
    # chkconfig nessusd on
    # /etc/init.d/nessusd start

    Update plugins
    To use up-to-date plugins, you need to register at http://www.nessus.org/register/
    After registering your e-mail, you will receive a message with an ID such as XXXX-XXXX-XXXX-XXXX-XXXX. You can simply enter it like this:
    # nessus-fetch --register XXXX-XXXX-XXXX-XXXX-XXXX
    # nessus-update-plugins -v
    Update automatically by crontab
    # crontab -u root -e
    Add following line.
    45 7 * * * /usr/sbin/nessus-update-plugins

    Scanning
    Start nessus
    # nessus &

    Login to nessus. Type login name and password.
    Go to "Plugins" tab and select plugins to perform scanning. For example, press "Enable all" button.
    Go to "Target" tab and enter targets. Type either a host name, a host IP address, or network as "192.168.0.0/24"
    Press "Start the scan" when you are ready. Wait a while

    You will see a report after a couple of moment. So take consideration of the result!



    Repair Corrupt RPM Database

    Strange things sometimes happen; one of them is a corrupt RPM database. This means that the computer tells you something is installed when it really is not.
    Here is how to solve this problem.

    First back up and then delete the database files with the following commands:
    $ su

    # cp /var/lib/rpm/__db.001 /home/nikesh
    # rm /var/lib/rpm/__db.001
    # cp /var/lib/rpm/__db.002 /home/nikesh
    # rm /var/lib/rpm/__db.002
    # rpm --rebuilddb

    Erase the Content of Disk Drive

    After unmounting the disk drive’s partitions, issue the following command (while logged in as root):
    # badblocks -ws /dev/hdX
    (Replace /dev/hdX with the device of the drive you want to erase.)
    This will absolutely erase the drive, and no tool can retrieve the data afterwards.



    Software Testing FAQ

    Q: What is verification?

    A: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walk-throughs and inspection meetings.

    Q: What is validation?

    A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

    Q: What is a walk-through?

    A: A walk-through is an informal meeting for evaluation or informational purposes. A walk-through is also a process at an abstract level. It’s the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walk-through is to ensure the code fits the purpose.

    Walk-through also offer opportunities to assess an individual’s or team’s competency.


    Q: What is an inspection?

    A: An inspection is a formal meeting, more formalized than a walk-through and typically consists of 3-10 people including a moderator, reader (the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document, before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost effective than bug detection.

    Q: What is quality?

    A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization’s management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user friendly and bug free.


    Q: What is good code?

    A: Good code is code that works, is free of bugs and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

    Q: What is good design?

    A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software functionality that can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented.

    Q: What is software life cycle?

    A: Software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases like initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.


    Q: How do you introduce a new software QA process?

    A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, a serious management buy-in is required and a formalized QA process is necessary. For medium size organizations with lower risk projects, management and organizational buy-in and a slower, step-by-step process is required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers, and feedback to developers and good communication is essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

    Q: What is the role of documentation in QA?

    A: Documentation plays a critical role in QA. QA practices should be documented, so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining of documents and determining what document will have a particular piece of information. Use documentation change management, if possible.


    Q: Why are there so many software bugs?

    A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

    • There are unclear software requirements because there is miscommunication as to what the software should or shouldn’t do.
    • Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.
    • Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
    • As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. And the changes require redesign of the software, rescheduling of resources and some of the work already completed have to be redone or discarded and hardware requirements can be effected, too.
    • Bug tracking can result in errors because the complexity of keeping track of changes can result in errors, too.
    • Time pressures can cause problems, because scheduling of software projects is not easy and it often requires a lot of guesswork and when deadlines loom and the crunch comes, mistakes will be made.
    • Code documentation is tough to maintain and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code or programmers and software engineers feel they cannot have job security if everyone can understand the code they write, or they believe if the code was hard to write, it should be hard to read.
    • Software development tools, including visual tools, class libraries, compilers, scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.

    Q: Give me five common problems that occur during software development.

    A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication.

    1. Requirements are poorly written when requirements are unclear, incomplete, too general, or not testable; therefore there will be problems.
    2. The schedule is unrealistic if too much work is crammed in too little time.
    3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
    4. It’s extremely common that new features are added after development is underway.
    5. Miscommunication either means the developers don’t know what is needed, or customers have unrealistic expectations and therefore problems are guaranteed.

    Q: Do automated testing tools make testing easier?

    A: Yes and no.

    For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile.

    A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret.

    If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and compared to the logged results in order to check effects of the change.

    One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts.

    Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.



    Q: Give me five solutions to problems that occur during software development.

    A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.

    1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.

    2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

    3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

    4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend design against changes and additions, once development has begun and be prepared to explain consequences. If changes are necessary, ensure they’re adequately reflected in related schedule changes. Use prototypes early on so customers’ expectations are clarified and customers can see what to expect; this will minimize changes later on.

    Q: Give me five solutions to problems that occur during software development. (Cont’d…)

    5. Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, tools of change management. Ensure documentation is available and up-to-date. Do use documentation that is electronic, not paper. Promote teamwork and cooperation.

    Q: What makes a good test engineer?

    A: Good test engineers have a “test to break” attitude. We, good test engineers, take the point of view of the customer; have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful, as it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers’ point of view and reduces the learning curve in automated test tool programming.

    Rob Davis is a good test engineer because he has a “test to break” attitude, takes the point of view of the customer, has a strong desire for quality and an attention to detail. He's also tactful and diplomatic and has good communication skills, both oral and written. And he has previous software development experience, too.


    Q: What is a requirements test matrix?

    A: The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project’s life cycle.

    The requirements test matrix is a table, where requirement descriptions are put in the rows of the table, and the descriptions of testing efforts are put in the column headers of the same table.

    The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality.

    The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort.

    The requirements test matrix is a representation of user requirements aligned against system testing.

    Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.

    Q: Give me a requirements test matrix template!

    A: For a simple requirements test matrix template, you want a basic table that you would like to use for cross-referencing purposes.

    How do you create one? You can create a requirements test matrix template in the following six steps:

    Step 1: Find out how many requirements you have.

    Step 2: Find out how many test cases you have.

    Step 3: Based on these numbers, create a basic table. Let’s suppose you have a list of 90 requirements and 360 test cases. Based on these numbers, you want to create a table of 91 rows and 361 columns.

    Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement numbers, and paste them into rows 2 through 91 of your table.

    Step 5: Focus on the first row of your table. One by one, copy all your 360 test case numbers, and paste them into columns 2 through 361 of your table.

    Step 6: Examine each of your 360 test cases, and, one by one, determine which of the 90 requirements they satisfy. If, for the sake of this example, test case 64 satisfies requirement 12, then put a large “X” into cell 13-65 of your table… and then you have it; you have just created a requirements test matrix template that you can use for cross-referencing purposes.

    Q: What is reliability testing?

    A: Reliability testing is designing reliability test cases, using accelerated reliability techniques (e.g. step-stress, test/analyze/fix, and continuously increasing stress testing techniques), AND testing units or systems to failure, in order to obtain raw failure time data for product life analysis.

    The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer’s reliability requirements.

    In the system test phase, or after the software is fully developed, one reliability testing technique we use is a test/analyze/fix technique, where we couple reliability testing with the removal of faults.

    Q: What is reliability testing? (Cont’d…)

    When we identify a failure, we send the software back to the developers, for repair. The developers build a new version of the software, and then we do another test iteration. We track failure intensity (e.g. failures per transaction, or failures per hour) in order to guide our test process, and to determine the feasibility of the software release, and to determine whether the software meets the customer’s reliability requirements.

    Q: Give me an example on reliability testing.

    A: For example, our products are defibrillators. From direct contact with customers during the requirements gathering phase, our sales team learns that a large hospital wants to purchase defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly.

    In this example, the fact that our defibrillator is able to run for 250 hours without any failure, in order to demonstrate the reliability, is irrelevant to these customers. In order to test for reliability we need to translate terminology that is meaningful to the customers into equivalent delivery units, such as the number of shocks. We describe the customer needs in a quantifiable manner, using the customer’s terminology. For example, our quantified reliability testing goal becomes as follows: Our defibrillator will be considered sufficiently reliable if 10 (or fewer) failures occur from 1,000 shocks.

    Then, for example, we use a test/analyze/fix technique, and couple reliability testing with the removal of errors. When we identify a failed delivery of a shock, we send the software back to the developers, for repair. The developers build a new version of the software, and then we deliver another 1,000 shocks into dummy resistor loads.

    We track failure intensity (i.e. number of failures per 1,000 shocks) in order to guide our reliability testing, and to determine the feasibility of the software release, and to determine whether the software meets our customers’ reliability requirements.

    Q: What is the role of test engineers?

    A: We, test engineers, speed up the work of your development staff, and reduce the risk of your company’s legal liability. We give your company the evidence that the software is correct and operates properly. We also improve your problem tracking and reporting. We maximize the value of your software, and the value of the devices that use it. We also assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool, and before your employees get bogged down. We help the work of your software development staff, so your development team can devote its time to building up your product. We also promote continual improvement. We provide documentation required by the FDA, the FAA, other regulatory agencies, and your customers. We save your company money by discovering defects EARLY in the design process, before failures occur in production, or in the field. We save the reputation of your company by discovering bugs and design flaws, before bugs and design flaws damage the reputation of your company.

    Q: What is a QA engineer?

    A: We, QA engineers, are test engineers but we do more than just testing. Good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are important. We, QA engineers, are successful if people listen to us, if people use our tests, if people think that we’re useful, and if we’re happy doing our work. I would love to see QA departments staffed with experienced software developers who coach development teams to write better code. But I’ve never seen it. Instead of coaching, we, QA engineers, tend to be process people.

    Q: What is the difference between software fault and software failure?

    A: Software failure occurs when the software does not do what the user expects to see. Software fault, on the other hand, is a hidden programming error.

    A software fault becomes a software failure only when the exact computation conditions are met, and the faulty portion of the code is executed on the CPU. This can occur during normal usage. Or, when the software is ported to a different hardware platform. Or, when the software is ported to a different compiler. Or, when the software gets extended.

    Q: What is the role of a QA engineer?

    A: The QA engineer’s role is as follows: We, QA engineers, use the system much like real users would, find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and provide feedback to the developers, i.e. tell them if they’ve achieved the desired level of quality.

    Q: What are the responsibilities of a QA engineer?

    A: Let’s say, an engineer is hired for a small software company’s QA role, and there is no QA team. Should he take responsibility to set up a QA infrastructure/process, testing and quality of the entire product? No, because taking this responsibility is a classic trap that QA people get caught in. Why? Because we QA engineers cannot assure quality. And because QA departments cannot create quality. What we CAN do is to detect lack of quality, and prevent low-quality products from going out the door. What is the solution? We need to drop the QA label, and tell the developers, they are responsible for the quality of their own work. The problem is, sometimes, as soon as the developers learn that there is a test department, they will slack off on their testing. We need to offer to help with quality assessment only.

    Q: How do you perform integration testing?

    A: To perform integration testing, first, all unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.

    Integration testing is considered complete when actual results and expected results are either in line, or differences are explainable or acceptable, based on client input.

    Q: What is integration testing?

    A: Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

    Q: How do test plan templates look like?

    A: The test plan document template helps to generate test plan documents that describe the objectives, scope, approach and focus of a software testing effort. Test document templates are often in the form of documents that are divided into sections and subsections. One example of a template is a 4-section document where section 1 is the description of the “Test Objective”, section 2 is the description of “Scope of Testing”, section 3 is the description of the “Test Approach”, and section 4 is the “Focus of the Testing Effort”.

    All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for a user to find what they want. With standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

    A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation.

    Q: What is a bug life cycle?

    A: Bug life cycles are similar to software development life cycles. At any time during the software development life cycle errors can be made during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and phase-out.

    The bug life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed and is no longer in existence.

    What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested.

    Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn’t create other problems elsewhere.

    If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking, management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

    Q: When do you choose automated testing?

    A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile. Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is that the interpretation of the results (screens, data, logs, etc.) can be a time-consuming task.

    Q: What other roles are in testing?

    A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test Configuration Managers.

    Depending on the project, one person can and often will wear more than one hat. For instance, we, Test Engineers, often wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager as well.

    Q: Which of these roles are the best and most popular?

    A: As to popularity, if we count the number of applicants and resumes, software developer positions tend to be the most popular among software engineers. As to testing, tester roles tend to be the most popular. Less popular roles are the roles of System Administrators, Test/QA Team Leads, and Test/QA Managers.

    As to “best” roles, the best ones are the ones that make YOU happy. The best job is the one that works for YOU, using the skills, resources, and the talents YOU have.

    To find the “best” role, you want to experiment and “play” different roles. Persistence, combined with experimentation, will lead to success!

    Q: What’s the difference between priority and severity?

    A: The word “priority” is associated with scheduling, and the word “severity” is associated with standards. “Priority” means something is afforded or deserves prior attention; a precedence established by urgency or order of importance.

    Severity is the state or quality of being severe; severe implies adherence to rigorous standards or high principles, and often suggests harshness. For example, a severe code of behavior.

    The words priority and severity do come up in bug tracking. A variety of commercial, problem-tracking / management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it. The fixes are based on project priorities and severity of bugs. The severity of a problem is defined in accordance to the end client’s risk assessment, and recorded in their selected tracking tool. Buggy software can severely affect schedules, which, in turn can lead to a reassessment and renegotiation of priorities.

    Q: What is the difference between efficient and effective?

    A: “Efficient” means having a high ratio of output to input; which means working or producing with a minimum of waste. For example, “An efficient engine saves gas.” Or, “An efficient test engineer saves time”.

    “Effective”, on the other hand, means producing or capable of producing an intended result, or having a striking effect. For example, “For rapid long-distance transportation, the jet engine is more effective than a witch’s broomstick”. Or, “For developing software test procedures, engineers specializing in software testing are more effective than engineers who are generalists”.

    Q: What is the difference between verification and validation?

    A: Verification takes place before validation, and not vice versa.

    Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself.

    The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the actual testing of an actual product.

    The output of verification is a nearly perfect set of documents, plans, specifications, and requirements document. The output of validation, on the other hand, is a nearly perfect, actual product.

    Q: What is upwardly compatible software?

    A: “Upwardly compatible software” is software that is compatible with a later or more complex version of itself. For example, upwardly compatible software is able to handle files created by a later version of itself.

    Q: What is upward compression?

    A: In software design, “upward compression” means a form of demodularization in which a subordinate module is copied into the body of a superior module.

    Q: What is usability?

    A: “Usability” means ease of use; the ease with which a user can learn to operate, prepare inputs for, and interpret the outputs of a software product.

    Q: What is V&V?

    A: “V&V” is an acronym that stands for verification and validation.

    Q: What is verification and validation (V&V)?

    A: Verification and validation (V&V) is a process that helps to determine if the software requirements are complete, correct; and if the software of each development phase fulfills the requirements and conditions imposed by the previous phase; and if the final software complies with the applicable software requirements.

    Q: What is a waterfall model?

    A: Waterfall is a model of the software development process in which the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase are performed in that order, possibly with overlap, but with little or no iteration.

    Q: What are the phases of the software development life cycle?

    A: The software development life cycle consists of the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase.

    Q: What is the difference between system testing and integration testing?

    A: “System testing” is a high level testing, and “integration testing” is a lower level testing. Integration testing is completed first, not the system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa.

    For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment.

    The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.

    Q: What types of testing can you tell me about?

    A: Each of the followings represents a different type of testing: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.

    Q: What is disaster recovery testing?

    A: “Disaster recovery testing” is testing how well a system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

    Q: How do you conduct peer reviews?

    A: The peer review, sometimes called PDR, is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people, including the test lead, the task lead (the author of whatever is being reviewed), and a facilitator (to make notes). The subject of the PDR is typically a code block, release, feature, or document. The purpose of the PDR is to find problems and see what is missing, not to fix anything. The result of the meeting is documented in a written report. Attendees should prepare for PDRs by reading through the documents before the meeting starts; most problems are found during this preparation.

    Why is the PDR great? Because it is a cost-effective method of ensuring quality, because bug prevention is more cost effective than bug detection.

    Q: How do you check the security of an application?

    A: To check the security of an application, one can use security/penetration testing. Security/penetration testing is testing how well a system is protected against unauthorized internal, or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

    Q: What stage of bug fixing is the most cost effective?

    A: Bug prevention techniques (i.e. inspections, peer design reviews, and walk-throughs) are more cost effective than bug detection.

    Q: What types of white box testing can you tell me about?

    A: Clear box testing, glass box testing, and open box testing.

    Clear box testing is white box testing. Glass box testing is also white box testing. Open box testing is also white box testing.

    White box testing is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

    Q: What black box testing types can you tell me about?

    A: Functional testing, system testing, acceptance testing, closed box testing, integration testing. Functional testing is a black box testing geared to functional requirements of an application. System testing is also a black box testing. Acceptance testing is also a black box testing. Closed box testing is also a black box testing. Integration testing is also a black box testing.

    Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.

    Q: Is regression testing performed manually?

    A: The answer to this question depends on the initial testing approach. If the initial testing approach was manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing.

    Q: What is good about PDRs?

    A: PDRs are informal meetings, and I do like all informal meetings. PDRs make perfect sense, because they’re for the mutual benefit of you and your end client.

    Your end client requires a PDR, because they work on a product, and want to come up with the very best possible design and documentation. Your end client requires you to have a PDR, because when you organize a PDR, you invite and assemble the end client’s best experts and encourage them to voice their concerns as to what should or should not go into the design and documentation, and why.

    When you’re a developer, designer, author, or writer, it’s also to your advantage to come up with the best possible design and documentation. Therefore you want to embrace the idea of the PDR, because holding a PDR gives you a significant opportunity to invite and assemble the end client’s best experts and make them work for you for one hour, for your own benefit. To come up with the best possible design and documentation, you want to encourage your end client’s experts to speak up and voice their concerns as to what should or should not go into your design and documentation, and why.

    Q: Give me a list of ten good things about PDRs!

    A: Number 1: PDRs are easy, because all your meeting attendees are your co-workers and friends.

    Number 2: PDRs do produce results. With the help of your meeting attendees, PDRs help you produce better designs and better documents than the ones you could come up with, without the help of your meeting attendees.

    Number 3: Preparation for PDRs helps a lot, but, in the worst case, if you had no time to read every page of every document, it’s still OK for you to show up at the PDR.

    Number 4: It’s technical expertise that counts the most, but many times you can influence your group just as much, or even more so, if you’re dominant or have good acting skills.

    Number 5: PDRs are easy, because, even at the best and biggest companies, you can dominate the meeting by being either very negative, or very bright and wise.

    Number 6: It is easy to deliver gentle suggestions and constructive criticism. The brightest and wisest meeting attendees are usually gentle on you; they deliver gentle suggestions that are constructive, not destructive.

    Number 7: You get many, many chances to express your ideas, every time a meeting attendee asks you to justify why you wrote what you wrote.

    Number 8: PDRs are effective, because there is no need to wait for anything or anyone; because the attendees make decisions quickly (as to what errors are in your document). There is no confusion either, because all the group’s recommendations are clearly written down for you by the PDR’s facilitator.

    Number 9: Your work goes faster, because the group itself is an independent decision making authority. Your work gets done faster, because the group’s decisions are subject to neither oversight nor supervision.

    Number 10: At PDRs, your meeting attendees are the very best experts anyone can find, and they work for you, for FREE!

    Q: What is the exit criteria?

    A: The “exit criteria” is a checklist, sometimes known as the “PDR sign-off sheet”. It is a list of peer design review related tasks that have to be done by the facilitator or attendees of the PDR, either during or near the conclusion of the PDR.

    By having a checklist, and by going through the checklist, the facilitator can verify that A) all attendees have inspected all the relevant documents and reports, B) all suggestions and recommendations for each issue have been recorded, and C) all relevant facts of the meeting have been recorded.

    The facilitator’s checklist includes the following questions:

    • Have we inspected all the relevant documents, code blocks, or products?
    • Have we completed all the required checklists?
    • Have I recorded all the facts relevant to this peer review?
    • Does anyone have any additional suggestions, recommendations, or comments?
    • What is the outcome of this peer review?

    At the end of the PDR, the facilitator asks the attendees to make a decision as to the outcome of the PDR, i.e. “What is our consensus… are we accepting the design (or document or code)?” Or, “Are we accepting it with minor modifications?” Or, “Are we accepting it after it has been modified and approved through e-mails to the attendees?” Or, “Do we want another peer review?” This is a phase during which the attendees work as a committee, and the committee’s decision is final.

    Q: What is the entry criteria?

    A: The entry criteria is a checklist, or a combination of checklists that includes the “developer’s checklist”, “testing checklist”, and the “PDR checklist”. Checklists are lists of tasks that have to be done by developers, testers, or the facilitator, at or before the start of the PDR.

    Using these checklists, before the start of the PDR, the developer, tester and facilitator can determine if all the documents, reports, code blocks or software products are ready to be reviewed, and if the PDR’s attendees are prepared to inspect them. The facilitator can ask the PDR’s attendees if they have been able to prepare for the peer review, and if they’re not well prepared, then he can send them back to their desks, and even ask the task lead to reschedule the PDR.

    The facilitator’s script for the entry criteria includes the following questions:

    • Are all the required attendees present at the PDR?
    • Have all the attendees received all the relevant documents and reports?
    • Are all the attendees well prepared for this PDR?
    • Have all the preceding life cycle activities been concluded?
    • Are there any changes to the baseline?

    Q: What is the difference between build and release?

    A: Builds and releases are similar, because both builds and releases are end products of software development processes. Builds and releases are similar, because both builds and releases help developers and QA teams to deliver reliable software.

    A build is a version of the software, typically one that is still in testing. A version number is usually given to a released product, but sometimes a build number is used instead.

    Difference number one: “Build” refers to software that is still in testing, but “release” refers to software that is usually no longer in testing.

    Difference number two: “Builds” occur more frequently; “releases” occur less frequently.

    Difference number three: “Versions” are based on “builds”, and not vice versa. Builds (or a series of builds) are generated first, as often as one build per every morning (depending on the company), and then every release is based on a build (or several builds), i.e. the accumulated code of several builds.

    Q: What is CMM?

    A: CMM is an acronym that stands for Capability Maturity Model. As to efforts in developing and testing software, the idea of CMM is that concepts and experiences do not always point us in the right direction, therefore we should develop processes, and then refine those processes. There are five CMM levels, of which Level 5 is the highest…

    • CMM Level 1 is called “Initial”.
    • CMM Level 2 is called “Repeatable”.
    • CMM Level 3 is called “Defined”.
    • CMM Level 4 is called “Managed”.
    • CMM Level 5 is called “Optimized”.

    CMM assessments take two weeks. They’re conducted by a nine-member team led by an SEI-certified lead assessor. There are not many Level 5 companies, and most hardly need to be. Within the United States, fewer than 8% of software companies are rated CMM Level 4 or higher. The U.S. government requires that all companies with federal government contracts maintain a minimum of a CMM Level 3 assessment.

    Q: What are CMM levels and their definitions?

    A: There are five CMM levels, of which Level 5 is the highest.

    CMM Level 1 is called “Initial”. The software process is at CMM Level 1, if it is an ad hoc process. At CMM Level 1, few processes are defined, and success, in general, depends on individual effort and heroism.

    CMM Level 2 is called “Repeatable”. The software process is at CMM Level 2, if the subject company has some basic project management processes, in order to track cost, schedule, and functionality. Software processes are at CMM Level 2, if necessary processes are in place, in order to repeat earlier successes on projects with similar applications. Software processes are at CMM Level 2, if there are requirements management, project planning, project tracking, subcontract management, QA, and configuration management.

    CMM Level 3 is called “Defined”. The software process is at CMM Level 3, if the software process is documented, standardized, and integrated into a standard software process for the subject company. The software process is at CMM Level 3, if all projects use approved, tailored versions of the company’s standard software process for developing and maintaining software. Software processes are at CMM Level 3, if there are process definition, training programs, process focus, integrated software management, software product engineering, intergroup coordination, and peer reviews.

    CMM Level 4 is called “Managed”. The software process is at CMM Level 4, if the subject company collects detailed data on the software process and product quality, and if both the software process and the software products are quantitatively understood and controlled. Software processes are at CMM Level 4, if there are software quality management (SQM), and quantitative process management.

    CMM Level 5 is called “Optimized”. The software process is at CMM Level 5, if there is continuous process improvement, and if there is quantitative feedback from the process, and from piloting innovative ideas and technologies. Software processes are at CMM Level 5, if there are process change management, defect prevention, and technology change management.

    Q: What is the difference between bug and defect in software testing?

    A: In software testing, the difference between “bug” and “defect” is small, and also depends on the end client. For some clients, bug and defect are synonymous, while others believe bugs are subsets of defects.

    Difference number one: In bug reports, the defects are easier to describe.

    Difference number two: In my bug reports, it is easier to write descriptions as to how to replicate defects. In other words, defects tend to require only brief explanations.

    Commonality number one: We, software test engineers, discover both bugs and defects, before bugs and defects damage the reputation of our company.

    Commonality number two: We, software QA engineers, use the software much like real users would, to find both bugs and defects, to find ways to replicate both bugs and defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they’ve achieved the desired level of quality.

    Commonality number three: We, software QA engineers, do not differentiate between bugs and defects. In our reports, we include both bugs and defects that are the results of software testing.

    Q: What is grey box testing?

    A: Grey box testing is a software testing technique that uses a combination of black box testing and white box testing. Grey box testing is not black box testing, because the tester does know some of the internal workings of the software under test. In grey box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the grey box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.

    Q: What is the difference between version and release?

    A: Both version and release indicate particular points in the software development life cycle, or in the life cycle of a document. Both terms, version and release, are similar, i.e. pretty much the same thing, but there are minor differences between them.

    Minor difference number 1: Version means a variation of an earlier or original type. For example, you might say, “I’ve downloaded the latest version of XYZ software from the Internet. The version number of this software is _____”

    Minor difference number 2: Release is the act or instance of issuing something for publication, use, or distribution. Release means something thus released. For example, “Microsoft has just released their brand new gaming software known as _______”

    Q: What is data integrity?

    A: Data integrity is one of the six fundamental components of information security. Data integrity is the completeness, soundness, and wholeness of the data that also complies with the intention of the creators of the data.

    In databases, important data - including customer information, order database, and pricing tables - may be stored. In databases, data integrity is achieved by preventing accidental, or deliberate, or unauthorized insertion, or modification, or destruction of data.

    Q: How do you test data integrity?

    A: Data integrity is tested by the following tests:
    • Verify that you can create, modify, and delete any data in tables.
    • Verify that sets of radio buttons represent fixed sets of values.
    • Verify that a blank value can be retrieved from the database.
    • Verify that, when a particular set of data is saved to the database, each value gets saved fully, and the truncation of strings and rounding of numeric values do not occur.
    • Verify that the default values are saved in the database, if the user input is not specified.
    • Verify compatibility with old data, old hardware, versions of operating systems, and interfaces with other software.

    Why do we perform data integrity testing? Because we want to verify the completeness, soundness, and wholeness of the stored data. Testing should be performed on a regular basis, because important data could, can, and will change over time.
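
    As a rough illustration of a few of the checks above, here is a minimal Python sketch using an in-memory SQLite table; the table name, columns, and values are hypothetical and not part of any real system.

    # Minimal data integrity sketch using an in-memory SQLite table (hypothetical schema).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE pricing (item TEXT, price REAL DEFAULT 0.0)")

    # Verify that a particular value is saved fully (no truncation or rounding).
    conn.execute("INSERT INTO pricing (item, price) VALUES (?, ?)", ("widget", 19.99))
    (price,) = conn.execute("SELECT price FROM pricing WHERE item = 'widget'").fetchone()
    assert price == 19.99, "value was truncated or rounded on save"

    # Verify that the default value is saved when the user input is not specified.
    conn.execute("INSERT INTO pricing (item) VALUES ('gadget')")
    (default_price,) = conn.execute("SELECT price FROM pricing WHERE item = 'gadget'").fetchone()
    assert default_price == 0.0, "default value was not applied"

    conn.close()
    print("data integrity checks passed")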

    Q: What is data validity?

    A: Data validity is the correctness and reasonableness of data. Reasonableness of data means, for example, that account numbers fall within a range, numeric data is all digits, dates have a valid month, day and year, and proper names are spelled correctly. Data validity errors are probably the most common, and most difficult to detect, data-related errors.

    What causes data validity errors? Data validity errors are usually caused by incorrect data entries, when a large volume of data is entered in a short period of time. For example, a data entry operator enters 12/25/2010 as 13/25/2010, by mistake, and this data is therefore invalid. How can you reduce data validity errors? You can use one of the following two simple field validation techniques.

    Technique 1: If the date field in a database uses the MM/DD/YYYY format, then you can use a program with the following two data validation rules: “MM” should not exceed “12”, and “DD” should not exceed “31”.

    Technique 2: If the original figures do not seem to match the ones in the database, then you can use a program to validate data fields. You can compare the sum of the numbers in the database data field to the original sum of numbers from the source. If there is a difference between the two figures, it is an indication of an error in at least one data element.
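
    Below is a minimal Python sketch of the two field validation techniques described above; the field format, the figures, and the comparison tolerance are assumptions made purely for illustration.

    # Minimal sketch of the two field validation techniques (hypothetical figures).
    def valid_mmddyyyy(value: str) -> bool:
        """Technique 1: MM must not exceed 12, DD must not exceed 31."""
        try:
            mm, dd, yyyy = value.split("/")
            return 1 <= int(mm) <= 12 and 1 <= int(dd) <= 31 and len(yyyy) == 4
        except ValueError:
            return False

    assert valid_mmddyyyy("12/25/2010")
    assert not valid_mmddyyyy("13/25/2010")   # the mistyped date from the example

    # Technique 2: compare the sum of a numeric field against the original source figure.
    source_total = 10450.00                    # sum taken from the original source
    database_amounts = [1200.00, 3250.00, 6000.00]
    if abs(sum(database_amounts) - source_total) > 0.01:
        print("data validity error: at least one data element does not match the source")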

    Q: Tell me about the TestDirector®

    A: The TestDirector® is a software tool that helps software QA professionals to gather requirements, to plan, schedule and run tests, and to manage and track defects/issues/bugs. It is a single browser-based application that streamlines the software QA process.

    The TestDirector’s “Requirements Manager” links test cases to requirements, ensures traceability, and calculates what percentage of the requirements are covered by tests, how many of these tests have been run, and how many have passed or failed.

    As to planning, the test plans can be created, or imported, for both manual and automated tests. The test plans then can be reused, shared, and preserved.

    The TestDirector’s “Test Lab Manager” allows you to schedule tests to run unattended, even overnight.

    The TestDirector’s “Defect Manager” supports the entire bug life cycle, from initial problem detection through fixing the defect, and verifying the fix.

    Additionally, the TestDirector can create customizable graphs and reports, including test execution reports and release status assessments.

    Q: Why should I use static testing techniques?

    A: There are several reasons why one should use static testing techniques.

    Reason number 1: One should use static testing techniques because static testing is a bargain, compared to dynamic testing.

    Reason number 2: Static testing is up to 100 times more effective. Even in selective testing, static testing may be up to 10 times more effective. The most pessimistic estimates suggest a factor of 4.

    Reason number 3: Since static testing is faster and achieves 100% coverage, the unit cost of detecting these bugs by static testing is many times lower than detecting bugs by dynamic testing.

    Reason number 4: About half of the bugs, detectable by dynamic testing, can be detected earlier by static testing.

    Reason number 5: If one uses neither static nor dynamic test tools, the static tools offer greater marginal benefits.

    Reason number 6: If an urgent deadline looms on the horizon, the use of dynamic testing tools can be omitted, but tool-supported static testing should never be omitted.

    Q: What is smoke testing?

    A: Smoke testing is a relatively simple check to see whether the product “smokes” when it runs. Smoke testing is also known as ad hoc testing, i.e. testing without a formal test plan.

    With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development life cycle, this can be the only kind of testing that can be performed.

    Smoke testing, by definition, is not exhaustive, but, over time, you can increase your coverage of smoke testing.

    A common practice at Microsoft, and some other software companies, is the daily build and smoke test process. This means, every file is compiled, linked, and combined into an executable file every single day, and then the software is smoke tested.

    Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale. Smoke testing does not have to be exhaustive, but should expose any major problems. Smoke testing should be thorough enough that, if it passes, the tester can assume the product is stable enough to be tested more thoroughly. Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry that guards against any errors in development and future problems during integration. At first, smoke testing might be the testing of something that is easy to test. Then, as the system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
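
    As a rough idea of what a very small smoke test might look like when attached to a daily build, here is a Python sketch; the executable path and the --version flag are hypothetical placeholders, not part of any real build system.

    # Minimal smoke test sketch, run after each daily build (hypothetical artifact path and flag).
    import subprocess
    import sys

    def smoke_test() -> bool:
        """Return True if the freshly built executable starts and reports its version."""
        result = subprocess.run(["./build/myapp", "--version"],
                                capture_output=True, text=True, timeout=30)
        return result.returncode == 0 and result.stdout.strip() != ""

    if __name__ == "__main__":
        sys.exit(0 if smoke_test() else 1)   # a non-zero exit marks the build as broken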

    Q: What is the difference between monkey testing and smoke testing?

    A: Difference number 1: Monkey testing is random testing, and smoke testing is nonrandom testing. Smoke testing is nonrandom testing that deliberately exercises the entire system from end to end, with the goal of exposing any major problems.

    Difference number 2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually.

    Difference number 3: Monkey testing is performed by “monkeys”, while smoke testing is performed by skilled testers.

    Difference number 4: “Smart monkeys” are valuable for load and stress testing, but not very valuable for smoke testing, because they are too expensive for smoke testing.

    Difference number 5: “Dumb monkeys” are inexpensive to develop, are able to do some basic testing, but, if we used them for smoke testing, they would find few bugs during smoke testing.

    Difference number 6: Monkey testing is not a thorough testing, but smoke testing is thorough enough that, if the build passes, one can assume that the program is stable enough to be tested more thoroughly.

    Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves from something simple to something more thorough.

    Difference number 8: Monkey testing takes “six monkeys” and a “million years” to run. Smoke testing, on the other hand, takes much less time to run, i.e. from a few seconds to a couple of hours.

    Q: Tell me about daily builds and smoke tests.

    A: The idea is to build the product every day, and test it every day. The software development process at Microsoft and many other software companies requires daily builds and smoke tests. According to their process, every day, every single file has to be compiled, linked, and combined into an executable program; and then the program has to be “smoke tested”.

    Smoke testing is a relatively simple check to see whether the product “smokes” when it runs.

    Please note that you should add revisions to the build only when it makes sense to do so. You should establish a build group, and build daily; set your own standard for what constitutes “breaking the build”, create a penalty for breaking the build, and check for broken builds every day.

    In addition to the daily builds, you should smoke test the builds, and smoke test them daily. You should make the smoke test evolve as the system evolves. You should build and smoke test daily, even when the project is under pressure.

    Think about the many benefits of this process! The process of daily builds and smoke tests minimizes the integration risk, reduces the risk of low quality, supports easier defect diagnosis, improves morale, enforces discipline, and keeps pressure cooker projects on track. If you build and smoke test DAILY, success will come, even when you’re working on large projects!

    Q: What is the purpose of test strategy?

    A: Reason number 1: The number one reason for writing a test strategy document is to “have” a signed, sealed, and delivered, FDA (or FAA) approved document, where the document includes a written testing methodology, test plan, and test cases.

    Reason number 2: Having a test strategy does satisfy one important step in the software testing process.

    Reason number 3: The test strategy document tells us how the software product will be tested.

    Reason number 4: The creation of a test strategy document presents an opportunity to review the test plan with the project team.

    Reason number 5: The test strategy document describes the roles, responsibilities, and the resources required for the test and schedule constraints.

    Reason number 6: When we create a test strategy document, we have to put into writing any testing issues requiring resolution (and usually this means additional negotiation at the project management level).

    Reason number 7: The test strategy is decided first, before lower level decisions are made on the test plan, test design, and other testing issues.

    Q: Give me one test case that catches all the bugs!

    A: On the negative side, if there were a “magic bullet”, i.e. the one test case that was able to catch ALL the bugs, or at least the most important bugs, it’d be a challenge to find it, because test cases depend on requirements; requirements depend on what customers need; and customers have a great many different needs that keep changing. As software systems are changing and getting increasingly complex, it is increasingly more challenging to write test cases.

    On the positive side, there are ways to create “minimal test cases” which can greatly simplify the test steps to be executed. But, writing such test cases is time consuming, and project deadlines often prevent us from going that route. Often the lack of enough time for testing is the reason for bugs to occur in the field.

    However, even with ample time to catch the “most important bugs”, bugs still surface with amazing spontaneity. The fundamental challenge is, developers do not seem to know how to avoid providing the many opportunities for bugs to hide, and testers do not seem to know where the bugs are hiding.

    Q: What is a test scenario?

    A: The terms “test scenario” and “test case” are often used synonymously. Test scenarios are test cases or test scripts, and the sequence in which they are to be executed. Test scenarios are test cases that ensure that all business process flows are tested from end to end. Test scenarios are independent tests, or a series of tests that follow each other, where each of them is dependent upon the output of the previous one. Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures. Test scenarios are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios. It is the test team that, with the assistance of developers and clients, develops test scenarios for integration and system testing. Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. Test procedures or scripts may cover multiple test scenarios.

    Q: What is the difference between a test plan and a test scenario?

    A: Difference number 1: A test plan is a document that describes the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a document that describes both typical and atypical situations that may occur in the use of an application.

    Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results.

    Difference number 3: A test plan is a description of the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a description of test cases that ensure that a business process flow, applicable to the customer, is tested from end to end.

    Q: Can you give me a few sample test cases?

    A: For instance, if one of the requirements is, “The brake lights shall be on, when the brake pedal is depressed”, then, based on this one requirement, I would write all of the following test cases:

    Test case number “101”: “Inputs:” The headlights are on. The brake pedal is depressed. “Expected result:” The brake lights are on. Verify that the brake lights are on, when the brake pedal is depressed.

    Test case number “102”: “Inputs:” The left turn lights are on. The brake pedal is depressed. “Expected result:” The brake lights are on. Verify that the brake lights are on, when the brake pedal is depressed.

    Test case number “103”: “Inputs:” The right turn lights are on. The brake pedal is depressed. “Expected result:” The brake lights are on. Verify that the brake lights are on, when the brake pedal is depressed.

    As you might have guessed, to verify this one particular requirement, one could write many, many additional test cases, but you get the idea.
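
    For illustration, the three test cases above could also be expressed as one parameterized automated test. The following Python/pytest sketch assumes a hypothetical lighting module with a brake_lights_on() function; neither is part of the original example.

    # Minimal sketch of test cases 101-103 as a parameterized test (hypothetical module).
    import pytest
    from lighting import brake_lights_on   # hypothetical function under test

    @pytest.mark.parametrize("headlights, left_turn, right_turn", [
        (True,  False, False),   # test case 101: headlights on
        (False, True,  False),   # test case 102: left turn lights on
        (False, False, True),    # test case 103: right turn lights on
    ])
    def test_brake_lights_on_when_pedal_depressed(headlights, left_turn, right_turn):
        # Expected result: the brake lights are on whenever the brake pedal is depressed.
        assert brake_lights_on(headlights, left_turn, right_turn, brake_pedal=True)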

    Q: What is a requirements test matrix?

    A: The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project’s life cycle.

    The requirements test matrix is a table, where requirement descriptions are put in the rows of the table, and the descriptions of testing efforts are put in the column headers of the same table.

    The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality. The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort.

    The requirements test matrix is a representation of user requirements aligned against system testing. Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.

    Q: Can you give me a requirements test matrix template?

    A: For a requirements test matrix template, you want to visualize a simple, basic table that you create for cross-referencing purposes.

    Step 1: Find out how many requirements you have.

    Step 2: Find out how many test cases you have.

    Step 3: Based on these numbers, create a basic table. If you have a list of 90 requirements and 360 test cases, you want to create a table of 91 rows and 361 columns.

    Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement numbers, and paste them into rows 2 through 91 of the table.

    Step 5: Now switch your attention to the first row of the table. One by one, copy all your 360 test case numbers, and paste them into columns 2 through 361 of the table.

    Step 6: Examine each of your 360 test cases, and, one by one, determine which of the 90 requirements they satisfy. If, for the sake of this example, test case number 64 satisfies requirement number 12, then put a large “X” into cell 13-65 of your table… and then you have it; you have just created a requirements test matrix template that you can use for cross-referencing purposes.
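
    As an illustration of steps 3 through 6, here is a minimal Python sketch that writes such a cross-reference table to a CSV file; the requirement and test case numbering scheme, and the single coverage entry, are hypothetical.

    # Minimal sketch: generate the requirements test matrix as a CSV file (hypothetical data).
    import csv

    requirements = [f"REQ-{i}" for i in range(1, 91)]      # 90 requirements
    test_cases   = [f"TC-{j}"  for j in range(1, 361)]     # 360 test cases
    coverage     = {"TC-64": ["REQ-12"]}                    # e.g. test case 64 satisfies requirement 12

    with open("requirements_test_matrix.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([""] + test_cases)                  # row 1: test case numbers
        for req in requirements:                            # rows 2 through 91: one per requirement
            row = ["X" if req in coverage.get(tc, []) else "" for tc in test_cases]
            writer.writerow([req] + row)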

    Q: What is reliability testing?

    A: Reliability testing is designing reliability test cases, using accelerated reliability techniques - for example step-stress, test / analyze / fix, and continuously increasing stress testing techniques - AND testing units or systems to failure, in order to obtain raw failure time data for product life analysis.

    The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer’s reliability requirements.

    In the system test phase, or after the software is fully developed, one reliability testing technique we use is a test / analyze / fix technique, where we couple reliability testing with the removal of faults.

    When we identify a failure, we send the software back to the developers, for repair. The developers build a new version of the software, and then we do another test iteration.

    Then we track failure intensity - for example failures per transaction, or failures per hour - in order to guide our test process, and to determine the feasibility of the software release, and to determine whether the software meets the customer’s reliability requirements.

    Q: What is stress testing?

    A: Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. For example, when a web server is stress tested, testing aims to find out how many users can be on-line, at the same time, without crashing the server. Stress testing tests the stability of a given system or entity. It tests something beyond its normal operational capacity, in order to observe any negative results. For example, a web server is stress tested, using scripts, bots, and various denial of service tools.

    Q: What is load testing?

    A: Load testing simulates the expected usage of a software program, by simulating multiple users that access the program’s services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns, in order to test the system’s response at peak loads.
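
    A very small load testing sketch, in Python, might look like the following; the URL, the number of simulated users, and the requests per user are hypothetical placeholders, and a real load test would normally use a dedicated tool.

    # Minimal load test sketch: simulate concurrent users against a web server (hypothetical URL).
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL, USERS, REQUESTS_PER_USER = "http://localhost:8080/", 50, 20

    def simulated_user(_):
        ok = 0
        for _ in range(REQUESTS_PER_USER):
            try:
                with urlopen(URL, timeout=10) as resp:   # one request per loop iteration
                    ok += (resp.status == 200)
            except OSError:                              # connection refused, timeout, HTTP error
                pass
        return ok

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        successes = sum(pool.map(simulated_user, range(USERS)))
    print(f"{successes}/{USERS * REQUESTS_PER_USER} requests succeeded")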

    Q: What is the difference between stress testing and load testing?

    A: The term, stress testing, is often used synonymously with performance testing, reliability testing, volume testing, and load testing. Load testing is a blanket term that is used in many different ways across the professional software testing community. Load testing generally stops short of stress testing. During stress testing, the load is so great that the expected results are errors, though there is a gray area in between stress testing and load testing.

    Q: What is the difference between performance testing and load testing?

    A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

    Q: What is the difference between reliability testing and load testing?

    A: The term, reliability testing, is often used synonymously with load testing. Load testing is a blanket term that is used in many different ways across the professional software testing community. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

    Q: What is incremental testing?

    A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.

    Q: What is alpha testing?

    A: Alpha testing is final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software, or hardware-assisted debuggers. The goal is to catch bugs quickly. Then (and this is called the second stage of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment that is similar to the intended use.

    Q: What is beta testing?

    A: Following alpha testing, “beta versions” of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

    Q: What is the difference between alpha and beta testing?

    A: Alpha testing is performed by in-house developers and in-house software QA personnel. Beta testing is performed by the public, a few select prospective customers, or the general public. Beta testing is performed after the alpha testing is completed.

    Q: What is clear box testing?

    A: Clear box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

    Q: What is boundary value analysis?

    A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
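
    For example, a minimal Python/pytest sketch of boundary value analysis might look like the following, assuming a hypothetical accept_quantity() function whose valid range is 1 through 100.

    # Minimal boundary value analysis sketch (hypothetical function under test).
    import pytest
    from orders import accept_quantity   # hypothetical function; valid range assumed to be 1..100

    @pytest.mark.parametrize("value, expected", [
        (0,   False),   # just outside the lower boundary
        (1,   True),    # minimum
        (2,   True),    # just inside the lower boundary
        (50,  True),    # typical value
        (99,  True),    # just inside the upper boundary
        (100, True),    # maximum
        (101, False),   # just outside the upper boundary
    ])
    def test_quantity_boundaries(value, expected):
        assert accept_quantity(value) is expected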

    Q: What is ad hoc testing?

    A: Ad hoc testing is a testing approach; it is the least formal testing approach.

    Q: What is gamma testing?

    A: Gamma testing is testing of software that does have all the required features, but did not go through all the in-house quality checks. Cynics tend to refer to software releases as “gamma testing”.

    Q: What is glass box testing?

    A: Glass box testing is the same as white box testing. It’s a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

    Q: What is open box testing?

    A: Open box testing is the same as white box testing. It’s a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.

    Q: What is black box testing?

    A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the “inner workings” of the software.

    Q: What is functional testing?

    A: Functional testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the “inner workings” of the software.

    Q: What is closed box testing?

    A: Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior; it considers neither the code itself, nor the “inner workings” of the software.

    Q: What is bottom-up testing?

    A: Bottom-up testing is a technique of integration testing in which low-level components are tested first. Because the higher-level components that call them have not yet been developed, the test engineer creates and uses test drivers that stand in for those callers and exercise the low-level components.
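
    A minimal bottom-up sketch in Python: the low-level tax calculation is tested first, and a small driver stands in for the higher-level invoicing module that has not been written yet. All names here are hypothetical.

    # Bottom-up example: the low-level component is tested first, and a driver
    # plays the role of the higher-level caller that does not exist yet.
    def calculate_tax(amount: float, rate: float) -> float:
        """Low-level component under test: compute tax, rounded to 2 decimals."""
        return round(amount * rate, 2)

    def driver() -> None:
        """Test driver standing in for the missing higher-level invoicing module."""
        checks = [
            ((100.0, 0.07), 7.00),
            ((19.99, 0.07), 1.40),
            ((0.0, 0.07), 0.00),
        ]
        for (amount, rate), expected in checks:
            actual = calculate_tax(amount, rate)
            status = "PASS" if actual == expected else "FAIL"
            print(f"{status}: calculate_tax({amount}, {rate}) = {actual}, expected {expected}")

    if __name__ == "__main__":
        driver()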

    Q: How do you know when to stop testing?

    A: This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment that complete testing can never be done. Common factors in deciding when to stop are…

    • Deadlines, e.g. release deadlines, testing deadlines;
    • Test cases completed with certain percentage passed;
    • Test budget has been depleted;
    • Coverage of code, functionality, or requirements reaches a specified point;
    • Bug rate falls below a certain level; or
    • Beta or alpha testing period ends.

    Q: What is configuration management?

    A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries and patches, as well as the changes made to them and who makes the changes.

    Rob Davis has had experience with a full range of CM tools and concepts. He can easily adapt to your software tool and process needs.

    Q: What should be done after a bug is found?

    A: When a bug is found, it needs to be communicated and assigned to developers who can fix it.

    After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn’t create other problems elsewhere.

    If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, and reproduce and fix it.

    Q: What is a test plan?

    A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document helps people outside the test group to understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group can understand it.

    Q: What if there isn’t enough time for thorough testing?

    A: Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. A risk-analysis checklist should include answers to the following questions:

    • Which functionality is most important to the project’s intended purpose?
    • Which functionality is most visible to the user?
    • Which functionality has the largest safety impact?
    • Which functionality has the largest financial impact on users?
    • Which aspects of the application are most important to the customer?
    • Which aspects of the application can be tested early in the development cycle?
    • Which parts of the code are most complex and thus most subject to errors?
    • Which parts of the application were developed in rush or panic mode?
    • Which aspects of similar/related previous projects caused problems?
    • Which aspects of similar/related previous projects had large maintenance expenses?
    • Which parts of the requirements and design are unclear or poorly thought out?
    • What do the developers think are the highest-risk aspects of the application?
    • What kinds of problems would cause the worst publicity?
    • What kinds of problems would cause the most customer service complaints?
    • What kinds of tests could easily cover multiple functionalities?
    • Which tests will have the best high-risk-coverage to time-required ratio?

    Q: What if the project isn’t big enough to justify extensive testing?

    A: If the project isn’t big enough to justify extensive testing, you need to consider the impact of project errors, not the size of the project.

    However, if extensive testing is still not justified, risk analysis is again needed and the considerations listed under “What if there isn’t enough time for thorough testing?” do apply; and then the test engineer should do “ad hoc” testing, or write up a limited test plan based on the risk analysis.

    Q: What can be done if the requirements are changing continuously?

    A: If the requirements are changing continuously, you want to work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance.

    It is helpful if the application’s initial design allows for some adaptability, so that any later changes do not require redoing the application from scratch.

    Additionally, try to…

    • Ensure the code is well commented and well documented; this makes changes easier for the developers;
    • Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes;
    • Allow for some extra time commensurate with probable changes in the project’s initial schedule;
    • Move new requirements to the ‘Phase 2’ version of the application and use the original requirements for the ‘Phase 1’ version;
    • Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application;
    • Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes; then let management or the customers decide if the changes are warranted; after all, that’s their job;
    • Balance the effort put into setting up automated testing with the expected effort required to redo the automated tests to deal with changes;
    • Design some flexibility into automated test scripts;
    • Focus initial automated testing on application aspects that are most likely to remain unchanged;
    • Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;
    • Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans;
    • Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.

    Q: What if the application has functionality that wasn’t in the requirements?

    A: It can take a serious effort to determine if an application has significant unexpected or hidden functionality, which can indicate deeper problems in the software development process.

    If the functionality isn’t necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.

    If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the unexpected functionality only affects minor areas, e.g. small improvements in the user interface, then it may not be a significant risk.

    Q: Why do you recommend that we test during the design phase?

    A: I recommend that we test during the design phase because it can prevent defects later on. I recommend verifying three things…

    1. Verify the design is good, efficient, compact, testable and maintainable.
    2. Verify the design meets the requirements and is complete (i.e. specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, starting state of each module, and how to guarantee the state of each module).
    3. Verify the design provides for enough memory and I/O devices, and a fast enough runtime, for the final product.

    Q: What is parallel/audit testing?

    A: Parallel/audit testing is a type of testing where the tester reconciles the output of the new system to the output of the current system, in order to verify the new system operates correctly.
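
    A minimal parallel/audit sketch in Python: the same input is run through the current system and the new system, and the outputs are reconciled. The two commands and the data file are placeholders, not real tools.

    # Parallel/audit example: run the same data through the current and the new
    # system, then reconcile the outputs. Both commands and the input file are
    # hypothetical placeholders.
    import difflib
    import subprocess

    def run(cmd, payload):
        return subprocess.run(cmd, input=payload, capture_output=True, text=True).stdout

    if __name__ == "__main__":
        payload = open("sample_transactions.csv").read()   # assumed test data
        old_out = run(["./billing_old"], payload)
        new_out = run(["./billing_new"], payload)
        if old_out == new_out:
            print("PASS: new system output matches the current system")
        else:
            print("FAIL: outputs differ")
            for line in difflib.unified_diff(old_out.splitlines(),
                                             new_out.splitlines(), lineterm=""):
                print(line)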

    Q: What is end-to-end testing?

    A: End-to-end testing is similar to system testing. It is the ‘macro’ end of the test scale, i.e. the testing of the complete application in a situation that mimics real-world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.

    Q: What is regression testing?

    A: Regression testing is the type of testing that ensures the software remains intact after changes. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to the results of the software under test. All discrepancies have to be highlighted and accounted for before the testing proceeds to the next level.
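
    A minimal regression sketch in Python, assuming a baseline of expected results is kept in a file (baseline_results.json is a made-up name); any discrepancy between the baseline and the software under test is flagged.

    # Regression example: expected results kept as a baseline are compared with
    # the results of the software under test; discrepancies are flagged before
    # testing proceeds. The file name and the stubbed suite are assumptions.
    import json

    def run_suite() -> dict:
        """Stand-in for executing the baseline scripts against the current build."""
        return {"login": "ok", "checkout": "ok", "report_totals": 1234}

    if __name__ == "__main__":
        with open("baseline_results.json") as fh:   # maintained baseline results
            baseline = json.load(fh)
        current = run_suite()

        discrepancies = {key: (baseline.get(key), current.get(key))
                         for key in set(baseline) | set(current)
                         if baseline.get(key) != current.get(key)}
        if discrepancies:
            print("FAIL: discrepancies to account for:", discrepancies)
        else:
            print("PASS: the release has not undone previously working behaviour")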

    Q: What is sanity testing?

    A: Sanity testing is cursory testing, performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. Sanity testing is a subset of regression testing. It normally includes a set of core tests, such as basic GUI functionality, to demonstrate connectivity to the database, application servers, printers, etc.
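
    A minimal sanity/smoke sketch in Python: a handful of core connectivity checks (database, application server, print server) run quickly to show the system is basically functional. The hostnames and ports are assumptions for illustration.

    # Sanity/smoke example: a few quick connectivity checks against core
    # services. Hostnames and ports are made up for illustration.
    import socket

    CORE_SERVICES = {
        "database":     ("db.example.local", 5432),
        "app server":   ("app.example.local", 8080),
        "print server": ("print.example.local", 631),
    }

    def reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for name, (host, port) in CORE_SERVICES.items():
            status = "ok" if reachable(host, port) else "UNREACHABLE"
            print(f"{name:13s} {host}:{port} ... {status}")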

    Q: What is installation testing?

    A: Installation testing is testing of full, partial, or upgrade install/uninstall processes. The installation test for a production release is conducted with the objective of demonstrating production readiness. Installation testing includes the inventory of configuration items, performed by the application’s system administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. Following installation testing, a sanity test is performed, if needed.

    Q: What is security/penetration testing?

    A: Security/penetration testing is testing how well the system is protected against unauthorized internal access, external access, or willful damage. Security/penetration testing usually requires sophisticated testing techniques.

    Q: What is recovery/error testing?

    A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

    Q: What are the parameters of performance testing?

    A: Performance testing verifies loads, volumes, and response times, as defined by requirements. Performance testing is a part of system testing, but it is also a distinct level of testing.

    The term ‘performance testing’ is often used synonymously with stress testing, load testing, reliability testing, and volume testing.

    Q: What is the difference between volume testing and load testing?

    A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing: during stress testing, the load is so great that errors are the expected results, though there is a gray area between stress testing and load testing.
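
    A minimal load-testing sketch in Python: a fixed pool of workers issues requests to a placeholder endpoint and records response times; raising the worker count until errors dominate pushes the run from load testing into stress testing.

    # Load-testing example: a fixed pool of workers hits a placeholder endpoint
    # and records response times. Increasing WORKERS until errors dominate is
    # where load testing shades into stress testing.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/health"   # assumed endpoint
    WORKERS = 20                           # raise this to push toward stress levels
    REQUESTS = 200

    def one_request(_):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                resp.read()
            return time.monotonic() - start, None
        except Exception as exc:
            return time.monotonic() - start, exc

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            results = list(pool.map(one_request, range(REQUESTS)))
        times = [t for t, err in results if err is None]
        errors = sum(1 for _, err in results if err is not None)
        if times:
            print(f"requests: {len(results)}, errors: {errors}, "
                  f"avg: {sum(times)/len(times):.3f}s, max: {max(times):.3f}s")
        else:
            print(f"all {errors} requests failed")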

    Read more