Sunday, April 04, 2010

DNS Server Questions

1) List the types of DNS servers.
Ans: Standard primary, standard secondary, Active Directory integrated, root server, caching-only, forwarder, and master.

2) What is TTL?
Ans: Time To Live, the length of time for which a record may be cached before it must be looked up again.

3) What is a PTR record?
Ans: A pointer record, used to map IP addresses to host names. These records are used only in reverse lookup zones.
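For illustration, a minimal Python sketch of a reverse (PTR) lookup, assuming the queried IP actually has a PTR record in a reachable reverse lookup zone; the address used here is only an example.

import socket

def reverse_lookup(ip):
    """Return the host name registered for an IP via a PTR lookup, or None."""
    try:
        hostname, aliases, addresses = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        # No PTR record (or the reverse zone is not reachable).
        return None

print(reverse_lookup("192.0.2.10"))  # example address; prints None if no PTR exists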

4) What is the primary purpose of DNS?
Ans: Host name resolution, i.e. translating host names into IP addresses.

5) What is the Start of Authority?
Ans: The Start of Authority (SOA) record contains the zone's serial number, which indicates the modifications made to the zone.

6) What is Dynamic DNS?
Ans: DNS in which clients and services dynamically register and update their own resource records.

7) What is the maximum character size of a DNS name?
Ans: 63 characters per label (255 characters for a complete domain name).

9) What is a zone or zone file?
Ans: A zone is a database for either a DNS domain, or for a DNS domain and one or more of its subdomains. This database is stored in a special text file called the zone file.

11) Why are multiple DNS servers created for the same zone?
Ans: For load balancing and fault tolerance.

12) What is a caching-only server?
Ans: A caching-only server does not host any zones. It resolves host names to IP addresses for client computers and stores the resulting mapping information in its cache; it then answers later queries from that cache without contacting other DNS servers. It is temporary storage of zone information.

13) What is a zone transfer?
Ans: The process of copying a zone from a master DNS server to a standard secondary DNS server is called a zone transfer.

14) What is a master DNS server?
Ans: The DNS server that holds the master copy of the zone information, from which secondaries copy the zone, is called the master DNS server.


15) What are forwarders?
Ans: A forwarder is a DNS server to which other DNS servers send the queries they cannot answer from their own zones or cache, typically for external (Internet) name resolution.

17) Which protocol is supported by the DNS server?
Ans: The Dynamic Update protocol.

18) What are the four service record folders?
Ans: _msdcs, _sites, _tcp, _udp

19) What are the six service record folders in Windows 2003?
Ans: _msdcs (Microsoft Domain Controller Services): contains the information about which domain controllers are hosting the zone.
_sites: indicates the site in which the zone has been configured.
_tcp and _udp: the two transport protocols over which the services used to communicate with Active Directory are located.
DomainDnsZones and ForestDnsZones: indicate in which domain and forest the DNS information has been configured.

19) What is a resource record?
Ans: The entries in a zone are called resource records. An entry may be, for example, a host name to IP address mapping.

20) What is the first thing you have to do on a DNS server before it starts resolving host names?

21) When will you configure a root DNS server?
Ans: A root server should be used only when a network is not connected to the Internet, or when a network is connected to the Internet by using a proxy server.

22) What is a forward lookup zone?
Ans: Resolves host names to IP addresses.

23) What is a reverse lookup zone?
Ans: Resolves IP addresses to host names.

24) What is a standard primary zone?
Ans: A standard primary DNS server stores DNS entries (IP address to host name mappings and other DNS resource records) in a zone file maintained on the server. The primary server maintains the master copy of the zone file. When changes need to be made to the zone, they should be made only on the standard primary server.

25) What is a standard secondary zone?
Ans: A standard secondary DNS server stores read-only copies of zones obtained from the standard primary by zone transfer.


26) What is a root server?
Ans: A root server contains a copy of the zone for the root domain, either the root domain for the Internet or the root domain for a company's private internal network. The purpose of the root server is to enable other DNS servers on a network to access the second-level domains on the Internet.
Note: A root server should be used only when a network is not connected to the Internet, or when a network is connected to the Internet by using a proxy server.

27) What is round robin?
Ans: Round robin is used when multiple servers (such as web servers) have identical configurations and identical host names but different IP addresses; the DNS server rotates the order of the returned addresses so that the load is spread across the servers.
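A minimal Python sketch of how round robin appears to a client: a name that has several A records returns more than one address, and successive queries may see them in a different order. The host name here is only an example.

import socket

def resolve_all(name):
    """Return every IPv4 address published for a host name."""
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # the address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

print(resolve_all("www.example.com"))  # several addresses if round robin is in use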

28) can you configure root server to use a forwarder?
Ans: NO.

29) What are root hints?
Ans: Root hints are server name and IP address combinations that point to the root servers located either on the Internet or on your organization's private network.
The Root Hints tab contains the list of DNS servers that can be contacted to resolve client DNS queries.
It maintains the information for the 13 Internet root servers.

32) What is an Active Directory integrated zone?
Ans: An Active Directory integrated DNS server is just like a standard primary, except that the DNS entries are stored in the Active Directory data store rather than in a zone file. Active Directory supports multi-master replication, so when changes need to be made to the zone, they can be made on any Active Directory integrated DNS server that contains the zone.

33) What is a simple query?
Ans: A simple query is a query that the DNS server can resolve without contacting any other DNS servers.

34) What is a recursive query?
Ans: A recursive query is a query that the DNS server cannot resolve by itself; it must contact one or more additional DNS servers on the client's behalf to resolve the query.

35) What is scavenging?
Ans: Scavenging is the process of searching for and deleting stale resource records in a zone.
PTR: pointer resource record.
SRV: service locator resource record.



36) What is an SRV record?
Ans: A service locator record, used to map a specific TCP/IP service to the list of servers that provide that service.

37) What is a CNAME record?
Ans: An alias resource record, used to map an additional host name to the actual (canonical) name of the host.

38) What is a stub zone in 2003?
Ans: A stub zone contains only the Name Server (NS) and Start of Authority (SOA) records (plus glue host records) for a zone. It tells this server which name servers, and in which domain, DNS has been configured for that zone.
The properties of DNS in the Advanced tab:
Disable recursion (also disables forwarders): by default this option is unchecked, meaning that recursion is enabled.
BIND secondaries: controls the zone transfer format used for replication between the primary and a BIND-based secondary.
Fail on load if bad zone data: unchecked means the zone will be loaded even if it contains some errors; if checked, a zone with errors will not be loaded.
Enable round robin: if several records exist for the same name, the answers are returned in rotating (round robin) order until the query is resolved.
Enable netmask ordering: used when the DNS server is maintained on a multihomed PC (a PC having multiple NIC cards) and is resolving the queries of clients on different subnets; answers closest to the client's subnet are returned first.
Secure cache against pollution: secures the cache by not storing records received from unauthorized DNS servers.











DNS TROUBLESHOOTING


50) How to check AD DNS registration
Ans: When DNS is correctly registering the Active Directory DNS records, you should have four folders with the following names under the DNS forward lookup zone:
_msdcs
_sites
_tcp
_udp


51)A Records appear and disappear randomly
Cause: Your DNS zone is configured to query WINS.

52) Can't log on or join the domain
Ans: If DNS is not set up correctly on the domain controller, domain-wide issues can occur, such as replication failures between domain controllers. If DNS is not set up correctly on the client, the client may experience many networking and Internet issues. Being unable to log on to the domain or join the domain from a workstation or server, and being unable to access the Internet, indicate that you may have DNS settings issues.

53) Can't open an external website using the same network domain name?
Ans: Create a host (A) record named www in the internal zone pointing to the public IP of the web site.


54) What are common DNS settings mistakes?
1. The domain controller is not pointing to itself for DNS resolution on all network interfaces. In particular, on a multihomed server the WAN connection may be assigned 127.0.0.1 as the DNS IP.
2. The "." zone exists under forward lookup zones in DNS.
3. The clients on the LAN do not point to the internal DNS server for DNS.

55)Can't find server name for ....: No response from server - DNS Request Timed Out?
Ans: Symptom: When running nslookup, you may receive this message: Can't find server name for ....: No response from server
Cause: the DNS server's reverse lookup zones do not contain a PTR record for the DNS server's IP address. Refer to case 0204BL

56) Can't Find Server Name for Address 127.0.0.1 when running nslookup?
Ans: Cause: You don't have a DNS server specified in your TCP/IP properties. If you have no DNS server configured on your client, nslookup will default to the local loopback address.

57) DNS issue with IP filtering
Ans: Symptoms: You have a Windows 2000 server running IIS for public access with 10 public IPs. The router is broken, so you would like to enable IP filtering to block all ports except port 80 for the web and ports 25 and 110 for mail. After enabling IP filtering, the server can't access any web sites, can't ping yahoo.com, and nslookup times out.
Cause: IP filtering blocks the ports used for DNS.

58)"DNS name does not exist."?
Ans: Cause: 1. Incorrect DNS settings.
2. The netlogon service tries to register the RR before the DNS service is up.

59)DNS on multi homed server?
Ans: It is not recommended to install DNS on a multihomed server. If you do, you should restrict the DNS server to listen only on a selected address.

60) DNS request timed out - IP name lookup failed?
Ans: Symptom: When troubleshooting the Outlook "550 5.7.1 relaying denied - ip name lookup failed" error by using nslookup to resolve the host name, you may receive "DNS request timed out... *** Request to mail.chicagotech.net timed-out."
Possible causes: 1. Incorrect DNS settings.
2. Incorrect TCP/IP settings on the DC.
3. Missing PTR record in the reverse lookup zones.

62) DNS server can't access the Internet?
Ans: Symptoms: You have a domain controller with DNS. The server can ping the router and any public IPs. However, the server can't open any web sites.
Resolution: Check the server's DNS settings; in particular, make sure the server points to the internal DNS server instead of the ISP DNS or 127.0.0.1.

63) How to register the DNS RRs?
Ans: 1. Go to DNS Manager and add them manually.
2. Restart the Netlogon service, or use the ipconfig and nbtstat commands.

64) How to troubleshoot DNS problems?
Ans: To correct DNS settings and troubleshoot DNS problems, you can:
1) Run nslookup from a command line and check whether the default DNS server is the one you expect.
2) Use ipconfig /all on the client to make sure the client points to the correct DNS server, and that the DC points only to itself for DNS by its actual TCP/IP address; make sure no ISP DNS servers are listed in the TCP/IP properties of any W2K/XP machine.
3) When the machine boots it should register itself with DNS. If not, use the ipconfig /registerdns command.
4) Check Event Viewer to see whether the event logs contain any error information. On both the client and the server, check the System log for failures during the logon process. Also check the Directory Service logs on the server and the DNS logs on the DNS server.
5) Use the nltest /dsgetdc:domainname command to verify that a domain controller can be located for a specific domain. The NLTest tool is installed with the Windows XP support tools.
6) If you suspect that a particular domain controller has problems, turn on Netlogon debug logging. Use the NLTest utility by typing nltest /dbflag:0x2000ffff at a command prompt. The information is logged in the Netlogon.log file in the Debug folder.
7) Use the DC diagnosis tool, dcdiag /v, to diagnose any errors. If you still have not isolated the problem, use Network Monitor to monitor network traffic between the client and the domain controller.

65) How can I verify that a computer's DNS entries are correctly registered in DNS?
A: You can use the nslookup tool to verify that DNS entries are correctly registered in DNS. For example, to verify record registration, use the following command: nslookup computername.domain.com.
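The same check can be scripted. A minimal Python sketch, assuming the machine running it uses the same DNS servers as your clients; the host name is only a placeholder.

import socket

def is_registered(fqdn):
    """Return the addresses registered for an FQDN, or an empty list if the lookup fails."""
    try:
        name, aliases, addresses = socket.gethostbyname_ex(fqdn)
        return addresses
    except socket.gaierror:
        return []

print(is_registered("computername.domain.com"))  # placeholder FQDN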

66) How to add DNS and WINS into your Cisco VPN server?
Ans: If your VPN client cannot find servers or cannot ping a computer by name, you may need to add DNS and WINS to your VPN server. For example, to add DNS and WINS on a Cisco PIX firewall, add "vpdn group 1 client configuration dns <dns server>" and "vpdn group 1 client configuration wins <wins server>".

67) How to clear bad information in Active Directory-integrated DNS
Ans: You may need to clear bad information in Active Directory-integrated DNS if DNS is damaged or if it contains incorrect registration information. To do that:
1) Change the DNS zones to standard primary zones.
2) Delete the DNS zones.
3) Use the ipconfig /flushdns command.
4) Recreate the DNS zones.
5) Restart the Net Logon service.
6) Use ipconfig /registerdns.

68) How to ensure that DNS is registering the Active Directory DNS records?
Ans: To ensure that DNS is registering the Active Directory DNS records, go to the DNS Management console > server name > Forward Lookup Zones > Properties, make sure Allow Dynamic Updates is set to Yes, and check that _msdcs, _sites, _tcp and _udp are correctly registering the Active Directory DNS records. If these folders do not exist, DNS is not registering the Active Directory DNS records. These records are critical to Active Directory functionality and must appear within the DNS zone. You should repair the Active Directory DNS record registration.

69) How does the internal DNS resolve Internet names without the ISP's DNS server?
Ans: As long as the "." zone does not exist under forward lookup zones in DNS, the DNS service uses the root hint servers. The root hint servers are well-known servers on the Internet that help all DNS servers resolve name queries.

70)How to reinstall the dynamic DNS in a Windows 2000 Active Directory?
Ans: Under the following situations you may want to reinstall the DDNS in a Windows 2000 Active Directory:
Some weird DNS errors have occurred and clearing DNS information has been unsuccessful.
Services that depend upon DNS, such as, the File Replication service (FRS) and/or Active Directory are failing.
The secondary DNS server doesn't support dynamic updates.
To reinstall the dynamic DNS in a Windows 2000 Active Directory,
1. Clear the DNS information.
2. Clear the caching resolver.
3. Point all DNS servers to the first DNS server under TCP/IP properties.
4. Re-add the zones and configure them to be Active Directory integrated.
5. Register your A resource record for DNS as well as your start of authority (SOA).

71)How to repair the DNS record registration
Ans: To repair the Active Directory DNS record registration:
Check for the existence of a Root Zone entry. View the Forward Lookup zones in the DNS Management console. There should be an entry for the domain. Other zone entries may exist. There should not be a dot (".") zone. If the dot (".") zone exists, delete the dot (".") zone. The dot (".") zone identifies the DNS server as a root server. Typically, an Active Directory domain that needs external (Internet) access should not be configured as a root DNS server.
The server probably needs to reregister its IP configuration (by using Ipconfig) after you delete the dot ("."). The Netlogon service may also need to be restarted. Further details about this step are listed later in this article.
Manually repopulate the Active Directory DNS entries. You can use the Windows 2000 Netdiag tool to repopulate the Active Directory DNS entries. Netdiag is included with the Windows 2000 Support tools. At a command prompt, type netdiag /fix.
To install the Windows 2000 Support tools:
Insert the Windows 2000 CD-ROM.
Browse to Support\Tools.
Run Setup.exe in this folder.
Select a typical installation. The default installation path is Systemdrive:\Program Files\Support Tools.
After you run the Netdiag utility, refresh the view in the DNS Management console. The Active Directory DNS records should then be listed.
NOTE: The server may need to reregister its IP configuration (by using Ipconfig) after you run Netdiag. The Netlogon service may also need to be restarted.
If the Active Directory DNS records still do not appear, you may need to manually re-create the DNS zone.

72)How to configure DNS Forwarders
Ans: To ensure network functionality outside of the Active Directory domain (such as browser requests for Internet addresses), configure the DNS server to forward DNS requests to the appropriate Internet service provider (ISP) or corporate DNS servers. To configure forwarders on the DNS server:
Start the DNS Management console.
Right-click the name of the server, and then click Properties.
Click the Forwarders tab.
Click to select the Enable Forwarders check box.
NOTE: If the Enable Forwarders check box is unavailable, the DNS server is attempting to host a root zone (usually identified by a zone named only with a period, or dot ("."). You must delete this zone to enable the DNS server to forward DNS requests. In a configuration in which the DNS server does not rely on an ISP DNS server or a corporate DNS server, you can use a root zone entry.
Type the appropriate IP addresses for the DNS servers that will accept forwarded requests from this DNS server. The list reads from the top down in order; if there is a preferred DNS server, place it at the top of the list.
Click OK to accept the changes.

73)DC's FQDN Does Not Match Domain Name?
Ans: Symptoms: After you promote or install a domain controller, the DNS suffix of your computer name may not match the domain name. Or the FQDN does not match the domain name because an NT 4.0 upgrade automatically clears the "Change primary DNS suffix when domain membership changes" check box. It is not possible to rename the computer on the Network Identification tab. Also, you may receive NETLOGON events in the System log with ID 5781, or other error messages that indicate a failure to dynamically register DNS records.
Resolutions: 1. After you upgrade to Microsoft Windows 2000, but before you run dcpromo and obtain the Active Directory Installation Wizard, add the following values to the following registry key:
Value name: SyncDomainWithMembership
Value type: REG_DWORD
Value: 1
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\

2. If you have already promoted to a domain controller, use the Active Directory Installation Wizard to demote to a member server. Click to select the Change primary DNS suffix when domain membership changes check box, and then run dcpromo to promote back to a domain controller.
3. Modify HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ and change Domain=mydomain.com, NV Domain=mydomain.com, SyncDomainWithMembership=1 (here mydomain.com is the domain name).


74) Primary or Active Directory Integrated DNS
Ans: Active Directory Integrated DNS permits all servers to accept updates. Instead of adding standard secondary DNS servers, you can convert the server from a primary DNS server to an Active Directory Integrated primary server and configure another domain controller to be a DNS server. With Active Directory Integrated DNS servers, all the servers are primary servers, so when a zone change is made at one server it is replicated to the others, eliminating the need for a zone transfer.

75) 2nd DNS issues
1. When setting up the 2nd DNS server, make sure you type the correct master DNS server IP address.
2. Make sure the primary and 2nd DNS servers can ping each other and that no firewall blocks them.
3. Make sure the primary and 2nd DNS servers point to each other as primary and to themselves as secondary.

76) Some A records don't appear in DNS
Cause: 1. Incorrect TCP/IP settings.
2. "Register this connection's addresses in DNS" is unchecked.

77) The DSA operation is unable to proceed because of a DNS lookup failure.
Symptoms: 1. When trying to run DCPROMO, you receive: "The operation failed because: The directory service failed to replicate off changes made locally. The DSA operation is unable to proceed because of a DNS lookup failure."
2. The Event Viewer may list Event ID 1265 - The DSA operation is unable to proceed because of a DNS lookup failure.
3. The DCDiag test displays this message: "The DSA operation is unable to proceed because of a DNS lookup failure".
Causes: 1. Incorrect TCP/IP configuration.
2. Incorrect DNS configuration
3. Bad information in DNS Manager.

78) "The procedure entry point DsIsManagedDnW could not be located in the dynamic link library NTDSAPI.dll"
Ans: Symptom: When trying to run DCDiag you get the following error: "The procedure entry point DsIsManagedDnW could not be located in the dynamic link library NTDSAPI.dll".
Resolutions: 1. Remove dcdiag.exe from Control Panel and reinstall it from a W2K/XP DC.
2. The "entry point not found" error is typical of a service pack mismatch: dcdiag.exe is out of sync with the service pack level of your system. To fix it, go to the service pack x folder, find "adminpack.msi", right-click it and select Install.

79) Troubleshooting the domain locator process
1) Check Event Viewer on both the client and the DNS server for any errors.
2) Verify that the IP configuration is correct for your network by using ipconfig /all.
3) Ping both the DNS server's IP address and the DNS server's name to verify network connectivity and name resolution.
4) Use the nslookup servername.domain.com command to verify that DNS entries are correctly registered in DNS.
5) If the nslookup command does not succeed, use one of the following methods to reregister records with DNS: a) force host record registration by using ipconfig /registerdns; b) force domain controller service registration by stopping and restarting the Netlogon service.
6) If you still have the same issue, use Network Monitor to monitor network traffic between the client and the domain controller.

80) Which DNS does a VPN client use?
1. Assuming the LAN connection and the VPN connection have different DNS servers because they are assigned by different DHCP servers, the active DNS follows the default gateway.
2. You can also manually pick which DNS server you want to use.

81) Which ports are used for DNS?
Ans: UDP and TCP port 53. However, internal DNS clients may not hear the answers, even though the query has been sent out on port 53, until you open the UDP ports above 1023 used for the replies.

82) Why can't I perform external name resolution to the root hint servers on the Internet?
A: Make sure the "." zone does not exist under forward lookup zones in DNS. If you do not delete this zone, you may not be able to perform external name resolution to the root hint servers on the Internet.


83) Why do I have to point my domain controller to itself for DNS?
A: The Netlogon service on the domain controller registers a number of records in DNS that enable other domain controllers and computers to find Active Directory-related information. If the domain controller is pointing to the ISP's DNS server, Netlogon does not register the correct records for Active Directory, and errors are generated in Event Viewer. The preferred DNS setting for the domain controller is itself; no other DNS servers should be listed. The only exception to this rule is with additional domain controllers. Additional domain controllers in the domain must point to the first domain controller (which runs DNS) that was installed in the domain and then to themselves as secondary.

84) Everyone can access our web site on the Internet, but no one can access the web site internally. Instead, we are pointed to our intranet.
A: If your network domain name is the same as your web site name, you should point the www host to the web site's public IP. To do this, open DNS Manager and create a host (A) record, for example www.chicagotech.net = public IP.

85) 1. *** Can't find server name for address w.x.y.z: Timed out
Cause: The DNS server cannot be reached or the DNS service is not running on that computer.
2. *** Can't find server name for address 127.0.0.1: Timed out
Cause: No servers have been defined in the DNS Service Search Order list.
3. *** Can't find server name for address w.x.y.z: Non-existent domain
Cause: There is no PTR record for the name server's IP address.
4. *** ns.domain.com can't find child.domain.com.: Non-existent domain
5. *** Can't list domain child.domain.com.: Non-existent domain
Cause: There is no separate db file for the domain, so querying that domain or running a zone transfer on it will produce the above errors.

86) What does netdiag /fix do?
A: The netdiag /fix switch is a very useful tool for correcting issues found by the DNS and domain controller tests.
1. DNS test: If the computer is a domain controller, Netdiag verifies all the DNS entries in the Netlogon.dns file to determine whether they are correct, and updates the appropriate entries if there is a problem.
2. Domain controller test: If the domain GUID cached on a local computer in your primary domain is different from the domain GUID saved on a domain controller, Netdiag tries to update the domain GUID on the local computer.

Monday, March 29, 2010

Network Security Cryptography





This paper tries to present an insight into cryptography, the ways of implementing it, its uses and its implications. Cryptography, the art and science of secret codes, has existed right from the advent of human civilization; it has been used to transmit messages safely and secretly across groups of people so that their adversaries did not get to know their secrets. As civilizations progressed, more and more complex forms of cryptography came into being; they were now not only symbolic representations in an unrecognizable form but complex mathematical transforms carried out on the messages. In the present-day world cryptography plays a major role in the safe transmission of data across the Internet and other means of communication.
In this paper we have dealt with examples of how different crypto algorithms are implemented, and have tried to cite some of the most used crypto algorithms, like DES (the Data Encryption Standard), RSA, IDEA, RC4, etc. We have also dealt with some of the applications of these algorithms, like link encryption, Pretty Good Privacy, public key cryptography, PEM, etc. We have also cited some methods of code-breaking, or cryptanalysis, like the mathematical attack, the brute force attack and power analysis.
Cryptography
If you want something to stay a secret, don't tell anyone and don't write it down. If you do have to send it to someone else, hide it in another message so that only the right person will understand. Many creative methods of hiding messages have been invented over the centuries. Cryptography can be defined as the art and science of secret codes. It is a collection of techniques that transform data in ways that are difficult to mimic or reverse by someone who does not know the secret. These techniques involve marking, transforming and reformatting the messages to protect them from disclosure, change, or both. Cryptography in the computer age basically involves the translation of the original message into a new and unintelligible one by a mathematical algorithm using a specific "key". People mean different things when they talk about cryptography. Children play with toy ciphers and secret languages. However, these have little to do with real security and strong encryption. Strong encryption is the kind of encryption that can be used to protect information of real value against organized criminals, multinational corporations, and major governments. Strong encryption used to be only military business; however, in the information society it has become one of the central tools for maintaining privacy and confidentiality.
Why do we need cryptography?
The art of long-distance communication was mastered by civilizations many centuries ago, and the transmission of secret political or confidential information has been a problem ever since. To solve this problem to some extent, secret codes were developed by groups of people who had to carry out such secretive communications. These codes were designed to transform words into code words using some basic guidelines known only to their members. Now messages could be sent or received with a reduced danger of hacking or forgery, as a code breaker would have to struggle really hard to break the code.
As time progressed and radio, microwave and Internet communication developed, more complex and safer codes started to evolve. The traditional use of cryptography was to make messages unreadable to the enemy during wartime. However, the introduction of the computing age changed this perspective dramatically. Through the use of computers, a whole new use for information hiding evolved. Around the early 1970s the private sector began to feel the need for cryptographic methods to protect its data. This could include 'sensitive information' (corporate secrets), password files or personal records.
Need for Cryptography
Some day to day examples
Encryption technology is used nowadays in almost any digital communication system. For example, the most common one is satellite TV or cable TV. All the signals are available in the air, but the programs can be viewed only by those subscribers who have made the payment. This is done by a simple password security system: the subscriber gets an authenticated password on payment and can use it only for the period he has paid for, after which it lapses. Another common application of encryption is the ATM card. Here also the transaction is done only on the acceptance of a secure and authenticated password. Mobile phones, and for that matter even Internet connections, are based on small-scale cryptographic techniques.
Crypto algorithm
The crypto algorithm specifies the mathematical transformation that is performed on data to encrypt or decrypt it. A crypto algorithm is a procedure that takes the plaintext data and transforms it into ciphertext in a reversible way. A good algorithm produces ciphertext that yields very few clues about either the key or the plaintext that produced it. Some algorithms are stream ciphers, which encrypt a digital data stream bit by bit. The best known algorithms are block ciphers, which transform data in fixed-size blocks one at a time.
• Stream ciphers
A stream cipher algorithm is designed to accept a crypto key and a stream of plaintext and to produce a stream of ciphertext.
• Block cipher
Block ciphers are designed to take data blocks of a specific size, combine them with a key of a particular size, and yield a block of ciphertext of a certain size. Block ciphers are analyzed and tested for their ability to encrypt data blocks of their given block size. A reasonable cipher should generate ciphertext that has as few noticeable properties as possible. A statistical analysis of ciphertext generated by a block cipher algorithm should find that individual data bits, as well as patterns of bits, appear completely random. Non-random patterns are the first thing a code breaker looks for, as they usually provide the entering wedge needed to crack a code.
Cipher modes
The term cipher mode refers to a set of techniques used to apply a block cipher to a data stream. Several modes have been developed to disguise repeated plaintext blocks and improve the security of the block cipher. Each mode defines a method of combining the plaintext, crypto key, and encrypted ciphertext in a special way to generate the stream of ciphertext actually transmitted to the recipient. In theory there could be countless different ways of combining and feeding back the inputs and outputs of a cipher. In practice, four basic modes are used.
• Electronic Code Book (ECB)
It is the simplest of all the modes: the cipher is simply applied to the plaintext block by block, and it is the most efficient mode. It can be sped up by using parallel hardware and, unlike other modes, does not require an extra data word for seeding a feedback loop. However, a block of padding may be needed to guarantee that full blocks are provided for encryption and decryption. ECB has a security problem in that repeated plaintext blocks yield repeated ciphertext blocks.
• Cipher Block Chaining (CBC)
This mode hides patterns in the plaintext by systematically combining each plaintext block with a ciphertext block before actually encrypting it; the two blocks are combined bit by bit using the exclusive-or operation. In order to guarantee that there is always some random-looking ciphertext to apply to the actual plaintext, the process is started with a block of random bits called the initialization vector. Two messages will never yield the same ciphertext, even if the plaintexts are identical, as long as the initialization vector is different. In most applications the initialization vector is added at the beginning of the message in plain text. A shortcoming of CBC is that encrypted messages may be as many as two blocks longer than the same message in ECB mode. One of the blocks is added to transmit the initialization vector to the recipient; proper decryption depends on the initialization vector to start the feedback process. The other block is added as padding so that a full block is always encrypted or decrypted.
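A minimal Python sketch of the CBC chaining idea. The "block cipher" here is just an XOR with the key, purely to keep the example short; it is not secure and only illustrates how each plaintext block is XORed with the previous ciphertext block (or the initialization vector) before being encrypted.

import os

BLOCK = 8  # toy block size in bytes

def toy_block_encrypt(block, key):
    # Stand-in for a real block cipher such as DES or AES (insecure, illustration only).
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(plaintext, key):
    # Pad with zero bytes so the data is a whole number of blocks.
    if len(plaintext) % BLOCK:
        plaintext += b"\x00" * (BLOCK - len(plaintext) % BLOCK)
    iv = os.urandom(BLOCK)              # random initialization vector
    previous, out = iv, [iv]            # the IV is sent along with the ciphertext
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK]
        chained = bytes(p ^ c for p, c in zip(block, previous))  # XOR with previous ciphertext block
        previous = toy_block_encrypt(chained, key)
        out.append(previous)
    return b"".join(out)

print(cbc_encrypt(b"identical blocks" * 2, b"8bytekey").hex())  # repeated plaintext, differing ciphertext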
• CFB - Cipher Feedback mode
CFB is similar to CBC in that it feeds the ciphertext block back to the block cipher. However, it is different because the block cipher doesn't directly encrypt the plaintext. Instead it is used to generate a constantly varying key that encrypts the plaintext with a Vernam cipher. In other words, blocks of plaintext are exclusive-ORed with successive blocks of data generated by the block cipher to produce the ciphertext. This mode is also called ciphertext auto key (CTAK). The advantage of this method is that it is not limited to the cipher's block size: the mode can be adapted to work with smaller blocks, down to single bits. Like CBC, however, it needs an initialization vector to be sent for decryption.
• OFB - Output Feedback
It is similar to CFB but simpler. It uses the block cipher all by itself to generate the Vernam keys; the key stream doesn't depend on the data stream at all. Here the block cipher has nothing to do with processing the message itself: it is only used to generate the keys. This mode is also called auto key mode. The advantage is that, like CFB, the length of the plaintext does not have to fit block boundaries. Also, because the key stream depends only on the key and the initialization vector and not on the data stream, the decryption key stream can be prepared in advance and kept at the receiver's end.
Crypto Algorithms
1. DES
This is a widely used algorithm. It was developed by IBM (from its earlier Lucifer cipher) and was adopted as an official Federal Information Processing Standard (FIPS PUB 46) in 1976. This algorithm uses a 64-bit key (8 parity bits + 56 key bits), converting 64-bit blocks of plaintext into 64-bit blocks of ciphertext (the block cipher method). This is done by putting the original text through a series of permutations and substitutions; the results are then merged with the original plaintext using an XOR operation. This encryption sequence is repeated 16 times, using a different arrangement of the key bits each time.
2. One time pads
A one-time pad is a very simple yet completely unbreakable symmetric cipher; that is, it uses the same key for encryption as for decryption. As with all symmetric ciphers, the sender must transmit the key to the recipient via some secure channel, otherwise the recipient won't be able to decrypt the ciphertext. The key for a one-time pad cipher is a string of random bits, usually generated by a cryptographically strong pseudo-random number generator (CSPRNG). With a one-time pad, there are as many bits in the key as in the plaintext. This is the primary drawback of a one-time pad, but it is also the source of its perfect security. It is essential that no portion of the key should ever be used for another encryption (hence the name "one-time pad"), otherwise cryptanalysis can break the cipher. The algorithm is very simple, for example an exclusive-or operation between the plaintext and the key; the same exclusive-or operation also gives back the plaintext:
Ciphertext = plaintext (+) key
Plaintext = ciphertext (+) key
(where (+) denotes bitwise XOR)
However, the security of the one-time pad is dependent upon the randomness of the generated key. The code is supposed to be safe even from a brute force attack, which runs the text through all possible keys, because an equal number of plausible plaintext messages would be generated.
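A minimal Python sketch of a one-time pad, assuming a key as long as the message drawn from the operating system's random source; the same XOR routine both encrypts and decrypts.

import os

def xor_bytes(data, key):
    """XOR each byte of data with the corresponding byte of the key."""
    if len(key) < len(data):
        raise ValueError("one-time pad key must be at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at dawn"
key = os.urandom(len(message))        # must never be reused for another message
ciphertext = xor_bytes(message, key)
recovered = xor_bytes(ciphertext, key)
assert recovered == message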
3. Triple DES
Triple encoding makes DES encoded text even more secure. It is equivalent to having a 112 bit key length. However, triple DES is significantly slower than commercial alternatives with similar key lengths.
4. Rivest Cipher #4 (RC4)
RC4 is a symmetric stream cipher developed by Ron Rivest. Its key size can be varied according to the level of security required; generally it is used with a 128-bit key. The algorithm is fairly immune to differential cryptanalysis, but when it is used with short key lengths it is vulnerable to brute-force cracking.
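The RC4 algorithm itself is short enough to sketch in full in Python: a key-scheduling pass that shuffles a 256-byte state array, followed by a generator whose output bytes are XORed with the data. This is a pedagogical sketch; RC4 is no longer considered safe for new designs.

def rc4(key, data):
    """Encrypt or decrypt data with RC4 (the operation is symmetric)."""
    # Key-scheduling algorithm (KSA): shuffle the state array using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream with the data.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"a 128-bit key!!!", b"Attack at dawn")   # the 16-byte key is 128 bits
print(rc4(b"a 128-bit key!!!", ct))                # same key decrypts: b'Attack at dawn'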
5. IDEA
IDEA is an algorithm which appeared in 1990. It was developed at the Swiss Federal Institute of Technology. Its security is based not on hiding the algorithm but on keeping the key secret. Its key is 128 bits long, which makes it more attractive than DES, and it can be used with the usual block cipher modes. This algorithm is publicly available and easy to implement. It is suitable for e-commerce, and it can be exported and used worldwide. So far none of the cryptanalysis techniques have worked against IDEA. A brute force attack against its 128-bit key, trying a billion keys per second for over a billion years, would still not find the key.
6. Skipjack
Skipjack is a block encryption algorithm developed by the NSA (National Security Agency, USA). It encrypts 64-bit blocks using an 80-bit key. The usual block cipher modes can be used with it to encrypt streams of data. It is provided in prepackaged encryption chipsets and in the Fortezza crypto card, a PC card containing a crypto processor and storage for keying material. The disadvantage of Skipjack is that very little about it is publicly known (reportedly to keep the NSA's design techniques secret). It is fairly resistant to differential cryptanalysis and other shortcut attacks. NSA's Skipjack is being promoted to protect military communications in the Defense Messaging System (DMS), which reflects a measure of confidence that Skipjack is secure.
7. RSA public key algorithm
The best known and most popular embodiment of the public key idea is RSA, named after its inventors Ronald Rivest, Adi Shamir and Leonard Adleman. The high level of security the RSA algorithm offers derives from the difficulty of decomposing large integers into prime factors: two primes which, when multiplied by one another, give the original number. Prime factoring of very large numbers is an important field in number theory. One of the drawbacks of the RSA algorithm compared with symmetric methods is that encrypting and decrypting messages takes much more computing power. The fastest RSA chip now in existence can only manage a throughput of 600 kbits when using 512-bit primes; comparable DES hardware implementations are anything from 1000 to 10000 times faster. At present DES software implementations can encrypt around 100 times faster than the RSA algorithm. Cryptanalysis can be done by factoring the key into its two primes; estimates for factoring a 512-bit key show that a computer system running at a million operations a second (1 MIPS) and using current algorithms would take 420,000 years to find the prime factors involved.
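A toy RSA key generation, encryption and decryption in Python, using deliberately tiny textbook primes (p = 61, q = 53) so the arithmetic can be followed by hand; real keys use primes hundreds of digits long.

def make_toy_rsa_keys():
    p, q = 61, 53                    # two small primes (far too small for real use)
    n = p * q                        # modulus, part of both keys
    phi = (p - 1) * (q - 1)          # Euler's totient of n
    e = 17                           # public exponent, coprime with phi
    d = pow(e, -1, phi)              # private exponent: modular inverse of e mod phi (Python 3.8+)
    return (e, n), (d, n)

public, private = make_toy_rsa_keys()
m = 65                               # a message, encoded as an integer smaller than n
c = pow(m, public[0], public[1])     # encrypt: c = m^e mod n, here 2790
print(pow(c, private[0], private[1]))  # decrypt: m = c^d mod n, prints 65 again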
8. AES
AES is a newer algorithm that has now replaced DES as the NIST standard. The Advanced Encryption Standard (AES) provides a better combination of safety and speed than DES. Using 128-bit secret keys, AES offers higher security against brute-force attack than the old 56-bit DES keys, and AES can use larger 192-bit and 256-bit keys if necessary. AES is a block cipher and encrypts data in fixed-size blocks, but each AES cycle encrypts 128 bits, twice the size of DES blocks. While DES was designed for hardware, AES runs efficiently in a broad range of environments, from programmable gate arrays, to smart cards, to desktop computer software and browsers. In 2000, NIST selected Rijndael, an encryption algorithm developed by two Belgian cryptographers, as the new AES. There are a few products that already use the Rijndael algorithm, notably Unix's NetBSD open-source version. Rijndael has also appeared as an option in several desktop file-encryption programs. AES is expected to become a FIPS (Federal Information Processing Standard) quite soon.
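For completeness, a short sketch of AES in CBC mode with a 128-bit key. This assumes the third-party pycryptodome package (the Crypto namespace) is installed; it is not part of the Python standard library.

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)                      # 128-bit secret key
iv = get_random_bytes(16)                       # the AES block size is 128 bits
cipher = AES.new(key, AES.MODE_CBC, iv)
ciphertext = cipher.encrypt(pad(b"attack at dawn", AES.block_size))
plaintext = unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext), AES.block_size)
print(plaintext)                                # b'attack at dawn'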
Internet cryptography techniques (applications of the crypto algorithms)
• Point-to-point link encryption
• IP link encryption
• A virtual private network (VPN) constructed with IP security protocol routers
• A VPN constructed with IPSEC firewalls
• Public key algorithms with Pretty Good Privacy (PGP)
• E-mail with Privacy Enhanced Mail (PEM)
• Watermarking
• Point-to-point link encryption
This produces a fully isolated connection between a pair of computers by applying crypto to the data link. It yields the highest security by being the most restrictive in physical and electronic access. It is not necessarily an internet solution since it doesn’t need to use TCP/IP software. It is the simplest design, but the most expensive to implement and extend.
• IP link encryption
This produces a highly secure extensible TCP/IP network by applying crypto to the data link and by restricting physical access to hosts on the network. This architecture blocks communication with untrusted hosts and sites. Sites use point to point interconnections and apply encryption to all traffic on those interconnections.
• VPN construction with IP security
This is a virtual private network that uses the Internet to carry traffic between trusted sites. Crypto is applied at the Internet layer using IPSEC. This approach uses encrypting routers and doesn't provide the sites with access to untrusted Internet sites.
• VPN construction with IPSEC firewalls
This is a different approach to the VPN that uses encrypting firewalls instead of encrypting routers. Crypto is still applied at the internet layer using IPSEC (IP security protocol).The firewalls encrypt all traffic between trusted sites and also provide control access to untrusted hosts. Strong firewall access control is necessary to reduce the risk of attacks on the crypto mechanisms as well as attacks on hosts within the trusted sites.
Digital signature
Digital signatures can be used to check the authenticity of the author of a message using the techniques mentioned above. In 1991 the National Institute of Standards and Technology (NIST) decided on a standard for digital signatures, DSS (the Digital Signature Standard). DSS proposes an algorithm for digital signatures (DSA, the Digital Signature Algorithm), although this is based not on RSA but on a public key implementation of the "discrete logarithm problem" (what value must the exponent x assume to satisfy y = g^x mod p, where p is a prime?). While the problem underlying this method is just as hard to solve as RSA's prime factor decomposition, many people have claimed that DSA's security is not perfect; after massive criticism its key length was finally increased from 512 to 1024 bits. DSS is expected to become an official standard for US government bureaus in the not too distant future.
• PEM
PEM is the standard for encrypting messages on the Internet's mail service. It uses both the RSA public key method and the symmetric DES method. To send a file in encrypted form, it is first encrypted using a randomly generated DES key. The DES key itself is then encoded with the recipient's public key in the RSA system and sent along with the DES-encoded file. The advantage of this is that only a small part of the message, the DES key, has to be encoded using the time-consuming RSA algorithm; the contents of the message itself are encrypted much faster using the DES algorithm alone.
• Message Digests
There is one more important encryption technique worth mentioning, and that is the one-way function. It is basically a non-reversible quick encryption: encrypting is easy but decrypting is not. While encryption could take a few seconds, decryption could take hundreds, thousands or millions of years even for the most powerful computers. One-way functions are used basically to test the integrity of a document or file by generating a digital fingerprint of the document using special hash functions. Assume that you have a document to send to someone, or to store for the future, and you need a way to prove at some later time that the document has not been altered. You run a one-way function which produces a fixed-length value called a hash (also called a message digest). The hash is a unique signature of the document that you can keep and send with the document. The recipient can run the same one-way function to produce a hash that should match the one you sent with the document. If the hashes don't match, the document has been altered or corrupted.
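A minimal Python sketch of a message digest using the standard library's hashlib; flipping even a single byte of the document changes the hash completely, which is how tampering is detected.

import hashlib

document = b"Pay the bearer one hundred dollars."
digest = hashlib.sha256(document).hexdigest()           # fixed-length fingerprint of the document

tampered = b"Pay the bearer two hundred dollars."
print(digest)
print(hashlib.sha256(tampered).hexdigest())             # a completely different value
print(hashlib.sha256(document).hexdigest() == digest)   # True: the document is unchanged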
• Water marking
A watermark is something that is imperceptibly added to the cover signal in order to convey the hidden data. It is used to protect the copyright of the author on the Internet. A watermark is a hidden file consisting of either a picture or data that gets copied with the document whenever it is downloaded from the web, and because of this the article cannot be copied or distributed without authorization.
Latest crypto techniques
Policy that regulates technology ends up being made obsolete by technological innovations. Trying to regulate confidentiality by regulating encryption closes one door and leaves two open: steganography and winnowing.
• Steganography
An encrypted message looks like garbage and alerts people that there is something to hide. But what if the message is totally innocuous-looking? This is an old trick that started centuries ago with writing in ink that is invisible until the paper has been heated. The microdot, a piece of film containing a very highly reduced image of the secret message and embedded in the punctuation marks of a normal document, was invented during World War II. For example, if you used the least significant bit of each pixel in a bitmap image to encode a message, the impact on the appearance of the image would not be noticeable. This is known as steganography, or covered writing. A 480-pixel-wide by 100-pixel-high image, smaller than many WWW home page banners, could theoretically contain a message of more than 5,000 characters. The encoding is quite easy with a computer, and requires no complicated mathematics at all. And of course the same principles apply to audio and video files as well. The image can be used simply as a carrier, with the message being first encrypted.
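A minimal Python sketch of the least-significant-bit idea, operating on a plain list of pixel byte values rather than a real image file so that no imaging library is needed; each pixel byte carries one bit of the hidden message.

def embed(pixels, message):
    """Hide message bytes in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this message")
    return [
        (p & 0xFE) | bits[i] if i < len(bits) else p   # overwrite only the lowest bit
        for i, p in enumerate(pixels)
    ]

def extract(pixels, length):
    """Recover length bytes of hidden message from the pixel bytes."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, length * 8, 8)
    )

cover = list(range(256)) * 4            # stand-in for real pixel data
stego = embed(cover, b"hi")
print(extract(stego, 2))                # b'hi'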
• Winnowing and Chaffing
As the name suggests, this technique adds chaff (garbage data) to the wheat (the message) before sending it, and then winnows (removes) the chaff from the wheat at the receiver. Since winnowing does not use encryption, it is not affected by the regulations on crypto products. The message is first broken into packets, and each packet is then MACed using a MAC algorithm such as HMAC-SHA1; this is very similar to running the packet through a hash function keyed with a shared secret. Chaff, in the form of extra packets with bogus MACs, is then added (chaffing) before the data is sent. At the receiving end only those packets whose MACs verify (showing that no changes have been made) are accepted, and the chaff is discarded; this is called winnowing.
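A minimal Python sketch of chaffing and winnowing with the standard library's hmac module, assuming the sender and receiver share a secret key; packets whose MAC does not verify are the chaff and are discarded.

import hashlib
import hmac
import os

KEY = b"shared secret key"

def mac(packet):
    return hmac.new(KEY, packet, hashlib.sha1).digest()

# Sender: real packets carry valid MACs, chaff packets carry random MACs.
wheat = [(b"meet", mac(b"meet")), (b"at dawn", mac(b"at dawn"))]
chaff = [(b"meet", os.urandom(20)), (b"at noon", os.urandom(20))]
transmitted = chaff[:1] + wheat[:1] + chaff[1:] + wheat[1:]

# Receiver: winnowing keeps only the packets whose MAC verifies.
received = [p for p, tag in transmitted if hmac.compare_digest(mac(p), tag)]
print(received)   # [b'meet', b'at dawn']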
Cryptanalysis
There are many kinds of cryptanalytic techniques:
1) Differential cryptanalysis.
2) Linear cryptanalysis.
3) Brute force cracking
4) Power analysis
5) Timing analysis, etc.
Cryptographers have traditionally analyzed the security of ciphers by modeling crypto algorithms as ideal mathematical objects. A modern cipher is conventionally modeled as a black box that accepts plaintext inputs and provides ciphertext outputs. Inside this box, the algorithm maps the inputs to the outputs using a predefined function that depends on the value of a secret key. The black box is described mathematically, and formal analysis is used to examine the system's security. In a modern cipher, an algorithm's security rests solely on the concealment of the secret key; thus attack strategies often reduce to methods that can expose the value of the secret key. Unfortunately, hardware implementations of the algorithm can leak information about the secret key, which adversaries can use.
Mathematical attacks
Techniques such as differential and linear cryptanalysis, introduced in the early 1990s, are representative of traditional mathematical attacks. Differential and linear cryptanalysis work by exploiting statistical properties of crypto algorithms to uncover potential weaknesses. These attacks are not dependent on a particular implementation of the algorithm but on the algorithm itself, and therefore they can be broadly applied. Traditional attacks, however, require the acquisition and manipulation of large amounts of data. Attacks that exploit weaknesses in a particular implementation are an attractive alternative and are often more likely to succeed in practice.
Implementation attacks
The realities of a physical implementation can be extremely difficult to control and often result in unintended leakage of side-channel information such as power dissipation, timing information, or faulty outputs. The leaked information is often correlated with the secret key, so adversaries monitoring this information may be able to learn the secret key and breach the security of the cryptosystem. Algorithms such as DES and RSA, which are now being implemented in smart cards, are also under considerable threat. Smart cards are often used to store crypto keys and execute crypto algorithms, and data on the card is also stored using cryptographic techniques.
Power consumption is one potential source of side-channel information; since power is generally supplied by an external source, it can be directly observed. All calculations performed by the smart card operate on logical 0s and 1s. Current technological constraints result in different power consumption when manipulating a logic one versus a logic zero, so based on a spectral analysis of the power curve, or on the timing between the ones and the zeros, the secret code can be cracked by adversaries.
Countermeasures
Many countermeasures are being worked out to prevent implementation attacks such as power analysis or timing analysis. These attacks are normally based on the assumption that the operations being attacked occur at fixed intervals of time; if the operations are randomly shifted in time, then statistical analysis of the side-channel information becomes very difficult. The other side of the coin is that hardware implementations must be carefully designed so that they do not leak any side-channel information. Hardware countermeasures are often difficult to design, analyse and test, hence software methods of introducing delay or of masking data are the easiest ways to overcome this problem.
Conclusion
The Internet has brought with it an unparalleled rate of new technology adoption. Commercial establishments, industry and the armed forces need an assortment of cryptographic products and other mechanisms to provide the privacy, authentication, message integrity and trust required to achieve their missions. These mechanisms demand procedures, policies and law. However, cryptography is not an end unto itself but an enabler of safe business and communication. Good cryptography and good policies are therefore as essential for the future of Internet-based communications as the applications that utilize them.

Tuesday, March 09, 2010

WEB TECHNOLOGY IN LAMP TECHNOLOGY






LAMP is a shorthand term for a web application platform consisting of Linux, Apache, MySQL and one of Perl or PHP. Together, these open source tools provide a world-class platform for deploying web applications. Running on the Linux operating system, the Apache web server, the MySQL database and the PHP or Perl programming languages deliver all of the components needed to build secure, scalable, dynamic websites. LAMP has been touted as "the killer app" of the open source world.

With many LAMP sites running e-business logic and e-commerce sites and requiring 24x7 uptime, ensuring the highest levels of data and application availability is critical. For organizations that have taken advantage of LAMP, these levels of availability are ensured by providing constant monitoring of the end-to-end application stack and immediate recovery of any failed solution components. Some solutions also support moving LAMP components among servers to remove the need for downtime associated with planned system maintenance.

The paper gives an overview of Linux, Apache and MySQL, and focuses mainly on PHP, its advantages over other active-generation tools for interactive web design, and its interface with an advanced database like MySQL; finally, a conclusion is provided.








CONTENTS


• Introduction
• Linux
• Apache
• MySQL
• Features included in MySQL
• PHP
• Technologies on the client side
• Technologies on the server side
• The benefits of using PHP server-side processing
• Browsers and their issues
• Applying LAMP
• When not to use LAMP?
• Advantages of LAMP
• Conclusion


















INTRODUCTION:
One of the great "secrets" of almost all websites (aside from those that publish static .html pages) is that behind the scenes, the web server is actually just one part of a two- or three-tiered application server system. In the open source world, this explains the tremendous popularity of the Linux-Apache-MySQL-PHP (LAMP) environment. LAMP provides developers with a traditional two-tiered application development platform. There is a database, and a "smart" web server able to communicate with the database. Clients only talk to the web server, while the web server transparently talks to the database when required. The following diagram illustrates how a typical LAMP server works.
Fig. Example architecture of LAMP
By combining these tools you can rapidly develop and deliver applications. Each of these tools is the best in its class, and a wealth of information is available for the beginner. Because LAMP is easy to get started with, yet capable of delivering enterprise-scale applications, the LAMP software model just might be the way to go for your next, or your first, application. Let's take a look at the parts of the system.

LINUX:

LINUX is presently the most commonly used implementation of UNIX. Built from the ground up as a UNIX work-alike operating system for the Intel 386/486/Pentium family of chips by a volunteer team of coders on the internet, LINUX has revitalized the old-school UNIX community and added many new converts. LINUX development is led by Linus Torvalds. The core of the system is the LINUX kernel. On top of the kernel, a LINUX distribution will usually utilize many tools from the Free Software Foundation's GNU project. LINUX has gained a huge amount of momentum and support, both from individuals and from large corporations such as IBM. LINUX provides a standards-compliant, robust operating system that inherits the UNIX legacy of security and stability. Originally developed for Intel x86 systems, LINUX has been ported to everything from small embedded systems on one end of the spectrum up to large mainframes and clusters. LINUX can run on most common hardware platforms.

APACHE:

Apache is the most popular web server on the Internet. Apache, like LINUX, MySQL and PHP, is an open source project. Apache is based on the NCSA (National Center for Supercomputing Applications) web server. In 1995-1996 a group of developers coalesced around a collection of patches to the original NCSA web server; this group evolved into the Apache Software Foundation. With the release of Apache 2.0, Apache has become a robust, well-documented, multi-threaded web server. Particularly appealing in the 2.0 release is the improved support for non-UNIX systems. Apache can run on a large number of hardware and software platforms. Since 1996 Apache has been the most popular web server on the Internet; presently Apache holds 67% of the market.

MySQL:

MySQL is a fast, flexible relational database. MySQL is the most widely used relational database management system in the world, with over 4 million instances in use. MySQL is high-performance, robust, multi-threaded and multi-user, and it utilizes a client-server architecture. Today, more than 4 million web sites create, use, and deploy MySQL-based applications. MySQL's focus is on stability and speed; all aspects of the SQL standard that do not conflict with these performance goals are supported.

Features include:

• Portability. Support for a wide variety of Operating Systems and hardware
• Speed and Reliability
• Ease of Use
• Multi user support
• Scalability
• Standards Compliant
• Replication
• Low TCO (total cost of ownership)
• Quality Documentation
• Dual license (free and non-free)
• Full Text searching
• Support for transactions
• Wide application support


PHP:


What's next in the field of web design? It's already here. Today's webmasters are deluged with available technologies to incorporate into their designs. The ability to learn everything is fast becoming an impossibility. So your choice of design technologies becomes increasingly important if you don't want to be the last man standing, left behind when everyone else has moved on. But before we get to that, let's do a quick review of the previous generation of web design.
In the static generation of web design, pages were mostly html pages that relied solely on static text and images to relay their information over the internet. Web pages lacked x and y coordinate positioning, and relied on hand coded tables for somewhat accurate placement of images and text. Simple and straight to the point, web design was more like writing a book and publishing it online.
The second generation of web design (the one we are in now) would be considered the ACTIVE generation. For quite a while now the internet has been drifting towards interactive web designs which allow users a more personal and dynamic experience when visiting websites. No longer is a great website simply a bunch of static text and images. A great website is now one which allows, indeed encourages, user interaction. No longer does knowing HTML inside out make you a webmaster, although that does help a great deal! Now, knowing how to use interactive technologies isn't just helpful, it's almost a requirement. Here are a few of the interactive technologies available for the webmasters of today.

Technologies on the client side:
1. ActiveX Controls: Developed by Microsoft, these are only fully functional in the Internet Explorer web browser. This eliminates them from being cross platform, and thus from being a webmaster's number one technology choice for the future. Disabling ActiveX controls in the IE web browser is something many people do for security, as the platform has been used by many for unethical and harmful things.

2. Java Applets: Java Applets are programs that are written in the Java language. They are self-contained and are supported by cross platform web browsers. While not all browsers work with Java Applets, many do. These can be included in web pages in almost the same way images can.

3. DHTML and Client-Side Scripting: DHTML, JavaScript, and VBScript all have in common the fact that the code is transmitted with the original webpage, and the web browser interprets the code and creates pages that are much more dynamic than static html pages. VBScript is only supported by Internet Explorer, which again makes it a bad choice for the web designer wanting to create cross platform web pages. With Linux and other operating systems gaining in popularity, it makes little sense to lock yourself into one platform.
Of all the client side options available, JavaScript has proved to be the most popular and most widely used, and is a natural next step once you're an expert with HTML.

Technologies on the server side:
1. CGI: This stands for Common Gateway Interface. It wasn't all that long ago that the only dynamic solution for webmasters was CGI. Almost every web server in use today supports CGI in one form or another. The most widely used CGI language is Perl. Python, C, and C++ can also be used as CGI programming languages, but are not nearly as popular as Perl. The biggest disadvantage of CGI on the server side is its lack of scalability. Although mod_perl for Apache and FastCGI attempt to improve performance in this department, CGI is probably not the future of web design because of this very problem.
2. ASP: Another of Microsoft's attempts to "improve" things. ASP is a proprietary scripting language. Performance is best on Microsoft's own servers, of course, and the lack of widespread COM support has reduced the number of webmasters willing to bet the farm on another one of Microsoft's silver bullets.

3. Java Server Pages and Java Servlets: Server side JavaScript is Netscape's answer to Microsoft's ASP technology. Since this technology is supported almost exclusively on the Netscape Enterprise Server, it is highly unlikely that it will ever become a serious contender in the battle for the webmaster's attention.

4. PHP: PHP is the most popular scripting language for developing dynamic web based applications. Originally developed by Rasmus Lerdorf as a way of gathering web form data without using CGI, it has quickly grown and gathered a large collection of modules and features. The beauty of PHP is that it is easy to get started with, yet it is capable of extremely robust and complicated applications. As an embedded scripting language, PHP code is simply inserted into an html document, and when the page is delivered the PHP code is parsed and replaced with the output of the embedded PHP commands. PHP is easier to learn and generally faster than Perl based CGI. Unlike ASP, however, PHP is totally platform independent and there are versions for most operating systems and servers.

The benefits of using PHP server side processing include the following:
• Reduces network traffic.
• Avoids cross platform issues with operating systems and web browsers.
• Can send data to the client that isn't on the client computer.
• Quicker loading time. After the server interprets all the PHP code, the resulting page is transmitted as HTML.
• Security is increased, since things can be coded into PHP that will never be viewed from the browser.


BROWSER:

Since all the tools are in place to deliver html content to a browser, it is assumed that control of the application will be through a browser based interface. Using the browser and HTML as the GUI (Graphical User Interface) for your application is frequently the most logical choice. The browser is familiar and available on most computers and operating systems. Rendering of html is fairly standard, although frustrating examples of incompatibilities remain. Using html and html-form elements displayed within a browser is easier than building a similarly configured user interface from the ground up. If your application is internal, you may want to develop for a specific browser and OS combination. This saves you the problems of browser inconsistencies and allows you to take advantage of OS specific tools.

APPLYING LAMP:

1. Storing our data: Our data is going to be stored in the MySQL Database. One instance of MySQL can contain many databases. Since our data will be stored in MySQL it will be searchable, extendable, and accessible from many different machines or from the whole World Wide Web.
2. User Interface: Although MySQL provides a command line client that we can use to enter our data we are going to build a friendlier interface. This will be a browser-based interface and we will use PHP as the glue between the browser and the Database.
3. Programming: PHP is the glue that takes the input from the browser and adds the data to the MySQL database. For each action (add, edit, or delete) you would build a PHP script that takes the data from the html form, converts it into a SQL query and updates the database. A language-neutral sketch of this pattern is given after this list.

4. Security: The standard method is to use the security and authentication features of the Apache web server. The mod_auth module allows for password based authentication. You can also use allow/deny directives to limit access based on location. Using one or both of these Apache tools you can limit access based on who users are or where they are connecting from. Other security features that you may want to use include mod_auth_ldap, mod_auth_oracle, and certificate based authentication provided by mod_ssl.
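As an illustration of the programming step above, here is a minimal, language-neutral sketch of the same take-form-input, build-parameterized-query, update-database pattern, written in Python with SQLite instead of PHP and MySQL (the table layout and names are invented for the example; the PHP version would follow the same flow):

import sqlite3

def add_entry(db_path, name, email):
    """Take validated form input and insert it into the database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS contacts (name TEXT, email TEXT)")
    # A parameterized query: the driver escapes the values, which avoids SQL injection.
    conn.execute("INSERT INTO contacts (name, email) VALUES (?, ?)", (name, email))
    conn.commit()
    conn.close()

add_entry("app.db", "Alice", "alice@example.com")

The same idea applies to the edit and delete actions: each becomes a short script that turns the submitted form fields into one parameterized SQL statement.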


When not to use LAMP?

Applications not well suited to LAMP include those that have a frequent need to exchange large amounts of transient data or that have particular and demanding needs for state maintenance. It should be remembered that at its core HTTP is a stateless protocol, and although cookies allow for some session maintenance they may not be satisfactory for all applications. If you find yourself fighting the HTTP protocol at every turn and avoiding the "url as a resource mapped to the file system" arrangement of web applications, then perhaps LAMP is not the best choice for that particular application.

ADVANTAGES OF LAMP:

• Seamless integration with Linux, Apache and MySQL to ensure the highest levels of availability for websites running on LAMP.
• Full 32-bit and 64-bit support for Xeon, Itanium and Opteron-based systems; runs on enterprise Linux distributions from Red Hat and SuSE.
• Supports Active/Active and Active/Standby LAMP configurations of up to 32 nodes.
• Data can reside on shared SCSI, Fibre Channel or Network Attached Storage devices, or on replicated volumes.
• Maximizes e-commerce revenues and minimizes e-business disruption caused by IT outages.
• Automated availability monitoring, failover recovery, and failback of all LAMP application and IT-infrastructure resources.
• Intuitive Java-based web interface provides at-a-glance LAMP status and simple administration.
• Easily adapted to sites running Oracle, DB2, and PostgreSQL.
• Solutions also exist for other Linux application environments including Rational ClearCase, Sendmail, Lotus Domino and mySAP.

CONCLUSION:
While Flash, ActiveX, and other proprietary elements will continue to creep in and entice webmasters, in the end compatibility issues and the price of development will dictate what eventually wins out in the next generation of web design. For the foreseeable future PHP, HTML, and databases are going to be in the future of interactive web design, and that's where I'm placing my bets. Open Source continues to play an important role in driving web technologies. Even though Microsoft would like to be the only player on the field, Open Source, with its flexibility, will almost certainly be the winner in the end. Betting the farm on LAMP (Linux, Apache, MySQL, PHP) seems much wiser to me than the alternative (Microsoft, IIS, ASP) ... not to mention it's a much less expensive route to follow.

A NOVEL TECHNIQUE TO ENHANCE THE SECURITY IN SYMMETRIC KEY CRYPTOGRAPHY

ABSTRACT
Cryptography is the science of keeping private information private and safe. In today's high-tech information economy the need for privacy is greater than ever. In this paper we describe a common model for the enhancement of all symmetric key algorithms such as AES, DES and the TCE algorithm. The proposed method combines the symmetric key and a sloppy key, from which a new key is extracted. The sloppy key is changed for a short range of packets transmitted in the network.

INTRODUCTION

Code books and cipher wheels have given way to microprocessors and hard drives, but the goal is still the same: take a message and obscure its meaning so only the intended recipient can read it. In today's market, key size is increased to keep up with the ever-growing capabilities of today's code breakers. Classical cryptanalysis involves an interesting combination of analytical reasoning, application of mathematical tools, pattern finding, patience, determination, and luck. A standard cryptanalytic attack is to know some plaintext matching a given piece of cipher text and try to determine the key, which maps one to the other. This plaintext can be known because it is standard or because it is guessed. If text is guessed to be in a message, its position is probably not known, but a message is usually short enough that the cryptanalyst can assume the known plaintext is in each possible position and do attacks for each case in parallel. In this case, the known plaintext can be something so common that it is almost guaranteed to be in a message. A strong encryption algorithm will be unbreakable not only under known plaintext (assuming the enemy knows all the plaintext for a given cipher text) but also under "adaptive chosen plaintext" -- an attack making life much easier for the cryptanalyst. In this attack, the enemy gets to choose what plaintext to use and gets to do this over and over, choosing the plaintext for round N+1 only after analyzing the result of round N. For example, as far as we know, DES is reasonably strong even under an adaptive
chosen plaintext attack. Of course, we do not have access to the secrets of government cryptanalytic services. Still, it is the working assumption that DES is reasonably strong under known plaintext and triple-DES is very strong under all attacks.
To summarize, the basic types of cryptanalytic attacks in order of difficulty for the attacker, hardest first, are: Cipher text only: the attacker has only the encoded message from which to determine the plaintext, with no knowledge whatsoever of the latter. A cipher text only attack is usually presumed to be possible, and a code's resistance to it is considered the basis of its cryptographic security. Known plaintext: the attacker has the plaintext and corresponding cipher text of an arbitrary message not of his choosing. The particular message of the sender’s is said to be ‘compromised’.
In some systems, one known cipher text-plaintext pair will compromise the overall system, both prior and subsequent transmissions, and resistance to this is characteristic of a secure code. Under the following attacks, the attacker has the far less likely or plausible ability to 'trick' the sender into encrypting or decrypting arbitrary plaintexts or cipher texts. Codes that resist these attacks are considered to have the utmost security. Chosen plaintext: the attacker has the capability to find the cipher text corresponding to an arbitrary plaintext message of his choosing. Chosen cipher text: the attacker can choose arbitrary cipher text and find the corresponding decrypted plaintext. This attack can show up in public key systems, where it may reveal the private key. Adaptive chosen plaintext: the attacker can determine the cipher text of chosen plaintexts in an interactive or iterative process based on previous results. This is the general name for a method of attacking product ciphers called 'differential cryptanalysis'. A common model for the enhancement of the existing symmetric algorithms has been proposed.

METHODOLOGY

Advantage of formulating mathematically:
In basic cryptology you can never prove that a cryptosystem is secure. A strong cryptosystem must have this property, but having this property is no guarantee that a cryptosystem is strong. In contrast, the purpose of mathematical cryptology is to precisely formulate and, if possible, prove the statement that a cryptosystem is strong. We say, for example, that a cryptosystem is secure against all (passive) attacks if any nontrivial attack against the system is too slow to be practical. If we can prove this statement then we have confidence that our cryptosystem will resist any (passive) cryptanalytic technique. If we can reduce this statement to some well-known unsolved problem then we still have confidence that the cryptosystem isn't easy to break. Other parts of cryptology are also amenable to mathematical definition. Again the point is to explicitly identify what assumptions we're making and prove that they produce the desired results. We can figure out what it means for a particular cryptosystem to be used properly: it just means that the assumptions are valid. The same methodology is useful for cryptanalysis too. The cryptanalyst can take advantage of incorrect assumptions.
Compression aids encryption by reducing the redundancy of the plaintext. This increases the amount of cipher text you can send encrypted under a given number of key bits. Nearly all practical compression schemes, unless they have been designed with cryptography in mind, produce output that actually starts off with high redundancy. Compression is generally of value, however, because it removes other known plaintext in the middle of the file being encrypted. In general, the lower the redundancy of the plaintext being fed to an encryption algorithm, the more difficult the cryptanalysis of that algorithm. In addition, compression shortens the input file, shortening the output file and reducing the amount of CPU time required to run the encryption algorithm. Compression after encryption is silly: if an encryption algorithm is good, it will produce output which is statistically indistinguishable from random numbers, and no compression algorithm will successfully compress random numbers.
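As a small, hedged demonstration of why compression is done before encryption and not after (using only the Python standard library; the sample strings are arbitrary):

import zlib, os

plaintext = b"to be or not to be " * 50       # highly redundant input
random_like = os.urandom(len(plaintext))      # stands in for good cipher text

print(len(plaintext), len(zlib.compress(plaintext)))        # compresses well
print(len(random_like), len(zlib.compress(random_like)))    # barely shrinks, if at all

Redundant plaintext shrinks dramatically, while random-looking data, which is what good cipher text should resemble, does not compress.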

TRIANGULAR-CODED ENCRYPTION ALGORITHM:
According to the Triangular Algorithm, compression is completed along with encryption. Consider a triangle ABC with sides 'a', 'b' and 'c' opposite the angles A, B and C respectively. 'a' and 'b' are the actual data and 'c' is the cipher text. Angle 'C' is the symmetric key, which is used for both encryption and decryption in this algorithm. Angle 'A' keeps changing for different measurements of the sides 'a' and 'b'. The level of encryption is increased to enhance the security of the cipher text.


Figure 1. Triangle formed by the plain texts 'a' and 'b' with C and A as the angles.
In the encryption phase, the transmitter knows the sides 'a' and 'b' and the angle 'C'. We get the cipher text 'c' from the sides 'a' and 'b' and the angle 'C'. The angle 'A' is also calculated from these parameters. 'C' and 'A' are the parameters to be transmitted. The formulas used to calculate the cipher text 'c' and the angle 'A' are given below.

c = sqrt(a^2 + b^2 - 2*a*b*cos C)   (by the law of cosines)

Where
a: plain text 1
b: plain text 2
C: the secret key
c: the cipher text

A = arcsin((a * sin C) / c)   (by the law of sines)

Where
A: varying angle
a: plain text 1
c: cipher text
C: secret key

Now in the decryption phase, the receiver knows the parameters 'c', 'A' and 'C', which are used to extract the actual data 'a' and 'b'. So it is obvious that C is the symmetric key known by both the sender and receiver. But the side 'a' changes for a constant value of C, and naturally the angle 'A' changes too.
B = 180 – (A+C)
Where
B: opposite angle of ‘b’
A: varying angle
C: secret key

a = (c * sin A) / sin C

Where
a: plain text 1
c: cipher text
A: varying angle
C: secret key

b = (c * sin B) / sin C

Where
b: plain text 2
c: cipher text
B: opposite angle of 'b'
C: secret key

Thus the plain texts 'a' and 'b' are retrieved by the above formulas. The values of the plain texts 'a' and 'b' are found based on the cipher text 'c', the secret key 'C' and the varying angle 'A', as sketched below.
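A minimal Python sketch of these triangle computations (a toy illustration only; it simply applies the law of cosines and law of sines given above, and the function and variable names are ours):

import math

def tce_encrypt(a, b, C_deg):
    """From plain texts a, b and secret angle C (degrees), compute cipher text c and varying angle A."""
    C = math.radians(C_deg)
    c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))   # law of cosines
    A = math.degrees(math.asin(a * math.sin(C) / c))         # law of sines
    return c, A

def tce_decrypt(c, A_deg, C_deg):
    """From cipher text c, varying angle A and secret angle C, recover plain texts a and b."""
    A, C = math.radians(A_deg), math.radians(C_deg)
    B = math.pi - (A + C)                                    # angles of a triangle sum to 180 degrees
    a = c * math.sin(A) / math.sin(C)
    b = c * math.sin(B) / math.sin(C)
    return a, b

cipher, A = tce_encrypt(3.0, 4.0, 60.0)
print(tce_decrypt(cipher, A, 60.0))   # approximately (3.0, 4.0)

Note that arcsine is ambiguous for obtuse angles, so this sketch only covers the simple acute-triangle case.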



THE CRYPTANALYSIS:
The sum of the angles in a triangle is 180,
(i.e.) θ1 + θ2 + θ3 = 180
θ1, the angle opposite the base, is taken as the secret key. It can vary from 1 to 178 degrees, since the other two angles must take at least 1 degree each when θ1 takes its maximum value:
Max(θ1) <= 180 – 1 – 1 = 178
If θ1 (the key) is specified to 7 decimal places, there are 10 ^ 7 possible key values between 1 and 2, and the number of possible key values between 1 and n is
Rn = n * 10 ^ 7
where Rn is the range for n.
PROPOSED MODEL (Universal Security Reinforcement Model):
The sender and receiver should have one more key, called the Sloppy key, in addition to their conventional key. This sloppy key (Sk) is changed dynamically, based on the data contained in the Sk-th packet transmitted over the network. The new key is synthesized with the conventional encryption key, the symmetric key (Smk), and the sloppy key (Sk) is created with the help of the sloppy key generator Ø:
Sk = Ø(sk, Smk, Vc)
Where,
Smk - symmetric key (the conventional key)
sk - the new key
Vc - validity count
Ø - sloppy key generator (this may be any operation such as addition, subtraction, log, sin, cos, etc.)
Sk - the resulting sloppy (synergistic) key
Let us take an example; the model works as illustrated below.
Let the data to be transmitted be

21 52 43 15 75 26 17 28 99 10 45
94 72 03 62 96 92 63 34 20
38 19 45 30 28 52 92 51 80 23

Assume the first new key is 4. Then for the first 4 data values (up to 15) the new key is 4. For example, for 52 the new key is 4; if the symmetric key is, say, 5, the sloppy key is calculated using 4 and 5 (e.g. by addition), so the sloppy key is 9, and that sloppy key is used for those 4 data values. Then the next new key is 15 (the value at the 4th position), so for the next 15 data values the new key is calculated in the same way.
Then the next new key is 63 (the value at the 15th position), and the process is repeated.
So the sloppy key is changed block-wise. To reduce the block size, set the validity count Vc, so that hacking becomes more difficult. A small sketch of this block-wise key change is given below.
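A rough Python sketch of the block-wise key change, under one plausible reading of the worked example above (addition is used as the generator, the symmetric key is 5, and all names are ours):

def sloppy_keys(data, symmetric_key, first_new_key):
    """Return (value, sloppy_key) pairs, changing the sloppy key block-wise.

    The block size is the current "new key"; the next new key is the data value
    at the end of the current block, as in the worked example above.
    """
    pairs = []
    new_key = first_new_key
    i = 0
    while i < len(data):
        sloppy = new_key + symmetric_key          # generator: addition of new key and symmetric key
        block = data[i:i + new_key]
        pairs.extend((value, sloppy) for value in block)
        i += new_key
        new_key = block[-1]                       # the next new key comes from the data itself
    return pairs

data = [21, 52, 43, 15, 75, 26, 17, 28, 99, 10, 45,
        94, 72, 3, 62, 96, 92, 63, 34, 20,
        38, 19, 45, 30, 28, 52, 92, 51, 80, 23]
for value, key in sloppy_keys(data, symmetric_key=5, first_new_key=4)[:6]:
    print(value, key)                             # the first four values use sloppy key 9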

CONCLUSION:
In summary, a common model was suggested for the enhancement of all crypto algorithms, including the TCE algorithm emphasized in this paper. The main intention of this paper is to reinforce the security of all existing algorithms using the above methodology. This model can be implemented wherever privacy is of great importance. The key concept of this approach is that a sloppy key (Sk) is generated along with the symmetric key (Smk). This sloppy key (Sk) is determined using the key adjuster (Ø), whose significance is that it keeps changing the existing key. As the range set by the validity counter (Vc) is decreased, the sloppy key (Sk) changes more frequently, which makes hacking more difficult.

CRYPTOGRAPHY IN SMART CARDS


In the age of universal electronic connectivity, of viruses and hackers, there is indeed no time at which security does not matter. The issue of security and privacy is not a new one, however, and the age-old science of cryptography has been in use since people first had information they wished to hide. Cryptography has naturally been extended into the realm of computers, and provides a solution to electronic security and privacy issues.
As technology advances, Smart Cards (e.g. SIM cards, bank cards, health cards) play a very important role in processing many transactions with a high level of security.
This security level is achieved by means of cryptography. In this paper we present an introduction to cryptography and its application in smart cards.

1. INTRODUCTION

Cryptography comes from the Greek words for – “secret writing”. Cryptography is the science of enabling secure communications between a sender and one or more recipients. It deals with a process associated with scrambling plain text (ordinary text, or clear text) into cipher text (a process called encryption) then back again (known as decryption).


Fig: Encryption model
An intruder is a hacker or cracker who hears and accurately copies down the complete cipher text. A passive intruder only listens to the communication channel, but an active intruder can also record messages and play them back later, inject his own messages, or modify legitimate messages before they get to the receiver.



Cryptography concerns itself with four objectives:
1. Confidentiality (the information cannot be understood by anyone for whom it was not intended).
2. Integrity (the information cannot be altered in storage or transit between sender and intended receiver without the alteration being detected).
3. Non-repudiation (the creator/sender of the information cannot deny at a later stage his or her intentions in the creation or transmission of the information).
4. Authentication (the sender and receiver can confirm each other's identity and the origin/destination of the information).

2. TYPES OF ENCRYPTION
We have two variations
• Symmetric encryption
• Asymmetric encryption
In symmetric encryption, the same key is used for both encryption and decryption. Consider a situation where Alice, a user from company A, is electronically communicating with Bob, a user from company B.
In the figure of symmetric communication between Alice and Bob, Alice would encrypt her message using a key and then send the message to Bob. Alice would separately communicate the key to Bob to allow him to decrypt the message. To maintain security and privacy, Alice and Bob need to ensure that the key remains private to them. A minimal illustrative sketch of this exchange follows.
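As a hedged illustration (not part of the original paper), the same symmetric pattern can be sketched in Python using the third-party cryptography package, whose Fernet recipe is an AES-based symmetric cipher:

from cryptography.fernet import Fernet

# Alice and Bob must share this key over a separate, private channel.
shared_key = Fernet.generate_key()

# Alice encrypts with the shared key...
token = Fernet(shared_key).encrypt(b"Meet at noon")

# ...and Bob decrypts with the very same key.
print(Fernet(shared_key).decrypt(token))   # b'Meet at noon'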
Symmetric encryption can be implemented by
• DES – The Data Encryption Standard
• AES – The Advanced Encryption Standard
• Cipher modes
In Asymmetric encryption, separate keys are used for encryption and decryption

Fig: Asymmetric communication between Bob and Alice
Here, Alice is sending a message to Bob. Alice creates her message and then encrypts it using Bob's public key. When Bob receives the encrypted message, he uses his secret private key to decrypt it. As long as Bob's private key has not been compromised, both Alice and Bob know that the message is secure.
Asymmetric encryption can be implemented by (a short illustrative sketch follows the list)
• RSA (Rivest, Shamir, Adleman)
• Other public key algorithms
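A corresponding asymmetric sketch with the same cryptography package, again only to illustrate the Alice-and-Bob flow described above (RSA with OAEP padding; the message is arbitrary):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Bob generates a key pair and publishes only the public key.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice encrypts with Bob's public key; only Bob's private key can decrypt.
ciphertext = bob_public.encrypt(b"Meet at noon", oaep)
print(bob_private.decrypt(ciphertext, oaep))   # b'Meet at noon'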



3. APPLICATIONS OF CRYPTOGRAPHY:
The following are some of the applications of cryptography.
• Digital Signatures
• Digital Certificates.
• Message Digest.
• Secure Socket Layer.
• Secure E-Business
• Secure IP.
• Challenge/Response systems (Smart cards).
In this paper we are concentrating on Smart Cards.
4. SMART CARDS:
Smart cards are an ideal means to provide the required level of security. In recent years, smart card technology has quickly advanced and has now reached a state where smart cards easily integrate into public key infrastructures. Today's smart cards provide memory, and they have cryptographic coprocessors that allow them to generate digital signatures using RSA.

a) Architecture:
A smart card is a credit card sized plastic card with an integrated circuit (IC) contained inside. The IC contains a microprocessor and memory, which gives smart cards the ability to process, as well as store more information.

Fig: Contact chip and Smart card architecture


The figure shows the architecture of a smart card, which contains RAM, ROM, FLASH memory, and a coprocessor. Smart cards use RAM for temporary storage and ROM as a bootstrap for loading the operating system. FLASH memory allows much higher data storage capacity on the card. The card has an on-chip dedicated coprocessor, called a crypto processor, with key generation and asymmetric algorithm acceleration.
The contact chip is an integrated circuit created by a lithographic process as a series of etched and plated regions on a tiny sheet of silicon.
A smart card can be used for payment transactions, such as purchases, and non-payment transaction, such as information storage and exchange.

b) Role of Cryptography:
The smart card provides two types of security services: user authentication and digital signature generation. Smart cards are specifically designed to perform these services with a high level of security. Authentication of users means proving that users are who they say they are. There are various ways to implement authentication using a smart card, but in this paper we consider smart cards with crypto processors. A smart card's data storage structure is comparable with the directory structure of disk media.
The main structure is based on three component types:
• Master File (MF), the root directory
• Dedicated file (DF), application directories or sub-directories
• Elementary file (EF), data files.
On the smart card there is only one Master File that contains some data files with global information about the smart card and its holder.
Dedicated files are directories that can be set under the root directory. Each application has a directory of its own. An application directory can have one or more sub directories.
Each directory has some specific elementary files, which contain secret cryptographic keys. All dedicated and elementary files have access conditions for executing a command on a file.
c) Cryptographic computations by Smart Cards:
The maximal length of data that can be encrypted by the smart card and that is not stored on the smart card is 8 bytes. The command that provides the encryption is called INTERNAL AUTHENTICATION and is developed to authenticate the smart card to the outside world. The command requires a random number from the outside world and a secret key that is stored on the smart card. The random number is encrypted with a secret key by the smart card to access the information.
The smart card is also able to compute a Message Authentication Code (MAC) over data that is stored on the smart card. A MAC that is computed by the smart card is also called a stamp.
Data is normally stored unencrypted on a smart card. A smart card can, however, encrypt data that is stored in specific files on the smart card. The encryption is possible for a file that has the access condition ENC (ENCrypted) for the read command.
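Conceptually, INTERNAL AUTHENTICATION is a challenge-response protocol: the outside world sends a random challenge, the card transforms it with a secret key it never reveals, and the terminal checks the result against its own computation. A minimal Python sketch of that idea, with an HMAC standing in for the card's block-cipher computation (a hedged simplification, not the card's actual command set):

import hmac, hashlib, os

SECRET_KEY = os.urandom(16)          # written to the card by the issuer at initialization

def card_internal_authenticate(challenge):
    """What the card does: transform the challenge with its hidden key."""
    return hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()

def terminal_verify():
    """What the outside world does: issue a random challenge and check the response."""
    challenge = os.urandom(8)        # 8 bytes, matching the data limit mentioned above
    response = card_internal_authenticate(challenge)
    expected = hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(terminal_verify())             # True when the card holds the right key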
d) Storage of Secret keys on Smart Card
The architecture of smart cards allows storing secret cryptographic keys in a safe manner. The stored keys can only be used to perform cryptographic computations; they cannot be read out. The keys are stored in specific data files called EF_KEY. The initial secret keys are written on the smart card during the initialization process performed by the card issuer. To write a new secret key Knew on the smart card, secret keys are needed that are already stored on the smart card.
A smart card makes use of two kinds of secret keys:
• Management key
• Operational key
A management key is used to encrypt another management key or an operational key that has to be written on the smart card. A management key is also called a Key Encrypting Key (KEK).
An operational key is used by the smart card to perform cryptographic operations on data.

5. APPLICATIONS OF SMART CARD:
Smart cards are used for huge range of applications today. A few common examples of applications are briefly described here.

i) SIM cards:
A common application of smart cards is in mobile phones. The central security processor of a mobile phone is a GSM (Global System for Mobile communications) SIM (Subscriber Identity Module). The use of SIM cards has radically improved the security of digital phones compared with the older analogue devices.


ii) Bank Cards:
Increasingly, credit and debit cards are being used with the contact chip rather than being swiped. The security features offered by smart cards protect consumers from their cards being cloned, as it is much more difficult to copy a cryptographically protected chip than a magnetic stripe.
iii) Health Cards:
Increasingly, smart cards are being used to store a citizen's medical data. The cards are carried by the citizen and can contain information such as a list of allergies, current and past medications, past treatment history, disease history and doctor's notes. This enables medical information to be easily accessed in an emergency.

Consider the scenario of how a smart card works for banking.

Stage 1: This is the initial process where the enrollment of the customer takes place; the image and details of the customer are saved on the card.
Fig: Evaluation scenario of smart cards
Stage 2: After the enrollment process, money is loaded and the wallet value is updated.
Stage 3: When the customer inserts the card for money, the system reads the data from the card to verify the validity of the customer.
Stage 4: After verification the machine credits or debits the customer's account. Finally the wallet value is updated.

6. MERITS AND DEMERITS:
High-level security can be achieved using cryptography in smart cards. Data present in the smart card is well secured and can be viewed only by authorized persons.
Although this system is very effective as protection, the large amount of processing power it needs makes it impractical on older, slower computers that lack the processing power for such an extensive encryption system. Weak authentication may break the security provided by the smart card.

7. CONCLUSION:
Cryptography provides a solution to the problem of security and privacy. The usage of cryptography in smart cards has become very popular. Smart card technology can be implemented for multiple applications such as bank cards, SIM cards, and health cards.
As card technologies continue to develop we can expect to see advanced cards interacting directly with users through displays, biometric sensors and buttons. This will open up many exciting novel applications, and further increase the usability of Smart Cards.


Achieving higher QOS by GPRS, WLAN Integration

ABSTRACT:-
GPRS (General Packet Radio Service) is a packet based communication service for mobile devices that allows data to be sent and received across a mobile telephone network. GPRS is a step towards 3G and is often referred to as 2.5G. As wireless technology evolves, one can access the Internet almost everywhere via many wireless access networks such as wireless LAN and GPRS. People would like to use wireless networks with high data rate, large coverage and low cost. Some networks such as GPRS can provide large coverage, but they only provide low data rates; some networks like wireless LAN can provide high data rates, but the access points are not widely deployed. None of the wireless networks can meet all requirements of a mobile user. Heterogeneous networks solve part of the problem. In heterogeneous networks, users can roam among different kinds of networks such as 802.11 wireless LAN and GPRS through vertical handoffs. But in heterogeneous networks, each kind of wireless network provides a different quality of service, so users roaming among the wireless networks will suffer enormous changes in quality of service. The paper proposes three access network selection strategies that keep mobile users staying in the wireless networks with higher quality services longer, and thus improve the average available bandwidth and decrease the call blocking probability.


Introduction:

IEEE 802.11 wireless LAN is the most popular high data rate wireless network. But the coverage of an access point is small, and the access points are not widely deployed and well organized. Users cannot receive WLAN services ubiquitously and have to change their settings when they are in different WLANs.
On the other hand, cellular systems like GPRS can provide services almost everywhere, but they cannot offer a data rate like WLAN: IEEE 802.11g has a 54 Mbps transmission rate, while GPRS offers only about 171 kbps under optimal conditions. Vertical handoffs in the heterogeneous network let users get service from both GPRS and WLAN; users who leave the coverage of an access point can vertically hand over to the GPRS network so that their Internet service will not be terminated. The paper proposes new mobility strategies to extend the time mobile hosts stay in higher quality networks in the heterogeneous network environment by using an ad hoc network. In an ad hoc network, mobile hosts relay messages for other mobile hosts. This characteristic helps to extend the service range of an access point while there are mobile hosts available to form a path that can relay messages to the access point.
Interworking mechanisms:-



The integration of WLAN into GPRS will allow users in "hot-spot" areas to use the high-speed wireless network and, when outside a hot-spot coverage area, to use the cellular data network. This is however not simple to implement, as it must provide services such as: session continuity, integrated billing and authentication between networks, inter-carrier roaming, and most importantly, a seamless user experience.
Some Existing coupling methods:
1. Tight coupling methods:


In general, the proposed tight coupling architecture provides a novel solution for internetworking between 802.11 WLANs and GPRS, and offers many benefits, such as:
• Seamless service continuation across WLAN and GPRS. The users are able to maintain their data sessions as they move from WLAN to GPRS and vice versa.
• Reuse of GPRS AAA.
• Reuse of GPRS infrastructure (e.g., core network resources, subscriber databases, billing systems) and protection of cellular operator’s investment.
• Support of lawful interception for WLAN subscribers.
• Increased security, since GPRS authentication and ciphering can be applied on top of WLAN ciphering.
• Common provisioning and customer care.
2. Loose Coupling Methods:


Loose coupling is another approach that provides internetworking between GPRS and WLAN. As can be seen, the WLAN network is coupled with the GPRS network in the operator’s IP network. Note that, in contrast to tight coupling, the WLAN data traffic does not pass through the GPRS core network but goes directly to the operator’s IP network.
Disadvantage of Existing Methods:


• After coupling, the WLAN and GPRS network cannot easily support third-party WLANs.
• Throughput capacity is much lower.
• More importantly, tight coupling cannot support legacy WLAN terminals, which do not implement the GPRS protocols.
• Implementation cost is higher.
The Proposed Strategies:


In this paper, the heterogeneous network is composed of WLAN, ad hoc WLAN and the GPRS network. With the use of an ad hoc WLAN network, mobile hosts can access the Internet through other hosts relaying to a WLAN AP. In the original heterogeneous network environment, mobile hosts will prefer WLAN, but if no WLAN AP is available, the mobile hosts will hand over to the GPRS network to keep the connections alive. With the use of ad hoc WLAN, mobile hosts have another alternative when there is no WLAN AP available: they can choose ad hoc WLAN. However, there may be more than one mobile host that can relay packets to more than one access point. A mobile host may select one of the best relay mobile hosts, or decide not to use the ad hoc network.


A mobile wireless network without infrastructure is commonly known as an ad hoc network. Infrastructure-less networks have no fixed routers. All nodes are capable of movement and can be connected dynamically in an arbitrary manner. Nodes of these networks function as routers which discover and maintain routes to other nodes in the network.
Selection strategies:-
Making such decisions will be a problem, and three selection strategies are proposed. The selection strategies are detailed below,
A. Fixed hop counts (FHC)
In this strategy, the ad hoc route cannot be longer than n hops. The mobile host first looks for an access point; if no access point is available, the mobile host will try to find a mobile host that has a route shorter than n – 1 hops to an access point. If more than one route is shorter than n – 1 hops, it selects the shortest one. If more than one route has the shortest hop count, it selects the AP that has the same IP range as itself; if no AP has the same IP range, it selects an arbitrary one. If no route is shorter than n – 1 hops, it tries to select the GPRS network.


B. Any available route (AAR)
In this strategy, any ad hoc route will be chosen if no higher-quality network is available: the mobile host will try to find a mobile host that has the shortest route to an access point. If no route is available, it tries to select the GPRS network.
C. Bandwidth pre-evaluation (BPE)
In the third strategy, the network status is measured before selection; ad hoc networks will be selected only if they provide a higher quality of service than the GPRS network. A sketch of the resulting selection order for a new call is given below.
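A compact Python sketch of the selection order for a new call under the fixed-hop-count strategy (all names and the route representation are illustrative stand-ins, not the paper's simulation code):

def select_network(wlan_aps, adhoc_routes, gprs_available, max_hops, my_ip_range):
    """Return the chosen network for a new call, or None to reject the call (FHC strategy)."""
    if wlan_aps:                                        # 1. prefer a directly reachable WLAN AP
        return ("WLAN", wlan_aps[0])
    short = [r for r in adhoc_routes if r["hops"] <= max_hops - 1]
    if short:                                           # 2. otherwise relay over ad hoc WLAN
        best = min(short, key=lambda r: (r["hops"], r["ip_range"] != my_ip_range))
        return ("ADHOC", best)                          #    shortest route, same IP range preferred
    if gprs_available:                                  # 3. fall back to GPRS
        return ("GPRS", None)
    return None                                         # 4. no network at all: reject the connection

routes = [{"hops": 2, "ip_range": "10.1"}, {"hops": 2, "ip_range": "10.2"}]
print(select_network([], routes, True, max_hops=3, my_ip_range="10.1"))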


Call initiation in network:-

In the proposed strategy, when a mobile host tries to initiate a call, it will look for a WLAN AP, an ad hoc WLAN relay host and the GPRS network sequentially. If none of the networks can be selected, the connection is rejected. When a user leaves the coverage of a GPRS cell or an access point, a handoff occurs. These cases are more complicated than call initiation, and we discuss the three cases separately.
A. Handoff from WLAN:-
First, try to find another WLAN AP. If no other AP is available, try to select an ad hoc WLAN network. If no ad hoc WLAN is qualified, try to select the GPRS network. Finally, if no GPRS network is available, the connection is forcibly terminated.
B. Handoff from ad hoc WLAN:-
First, try to find a WLAN AP. If no AP is available, try to select an ad hoc WLAN network. If no ad hoc WLAN is qualified, try to select the GPRS network. Finally, if no GPRS network is available, the connection is forcibly terminated.
C. Handoff from GPRS:-
First, try to find another GPRS base station. If no other base station is available, try to find a WLAN AP. If no AP is available, try to select an ad hoc WLAN network. If no ad hoc WLAN is qualified, the connection is forcibly terminated. These fallback chains are sketched below.
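The three handoff cases reduce to slightly different fallback chains; a small Python sketch of them (again only an illustration, with hypothetical boolean inputs for what is currently reachable):

def handoff(current, other_ap, other_bs, adhoc_ok, gprs_ok):
    """Pick the next network when leaving WLAN, ad hoc WLAN or GPRS coverage.

    Each flag says whether that alternative is currently usable; returning None
    means the connection is forcibly terminated.
    """
    if current in ("WLAN", "ADHOC"):
        for ok, name in ((other_ap, "WLAN"), (adhoc_ok, "ADHOC"), (gprs_ok, "GPRS")):
            if ok:
                return name
    elif current == "GPRS":
        for ok, name in ((other_bs, "GPRS"), (other_ap, "WLAN"), (adhoc_ok, "ADHOC")):
            if ok:
                return name
    return None

print(handoff("WLAN", other_ap=False, other_bs=False, adhoc_ok=True, gprs_ok=True))   # ADHOC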
Conclusions:-
The proposed strategies can reduce the number of times a user changes his or her IP address. The advantage disappears as mobility increases, because routes cannot be maintained in a high mobility network. Here, three mobility strategies are proposed to improve the service quality for mobile hosts in heterogeneous networks by using ad hoc routing. Using the proposed strategies, the average available bandwidth can be more than twice that with no strategy applied, and the request-blocking rate can be reduced by up to 94%, and by 50% on average. The change of IP address is a serious problem for mobile users, and the proposed strategies give a 9% improvement in the number of IP address changes. This helps to ease the impact of the mobile IP protocols on real time applications.
However, the drawback of ad hoc networks is inherited by the proposed strategies: the handoff frequency rises because of the instability of relaying hosts. This can be mitigated by using an ad hoc routing protocol that considers stability, or by reducing the length of an ad hoc route.