Monday, March 29, 2010

Network Security Cryptography





This paper tries to present an insight into cryptography: the ways of implementing it, its uses, and its implications. Cryptography, the art and science of secret codes, has existed since the advent of human civilization; it has been used to transmit messages safely and secretively across groups of people so that their adversaries did not learn their secrets. As civilizations progressed, more and more complex forms of cryptography came into being; messages were no longer merely symbolic representations in an unrecognizable form but complex mathematical transforms carried out on the messages. In the present-day world, cryptography plays a major role in the safe transmission of data across the Internet and other means of communication.
In this paper we deal with examples of how different crypto algorithms are implemented and cite some of the most widely used ones, like DES (the Data Encryption Standard), RSA, IDEA, and RC4. We also deal with some of the applications of these algorithms, like link encryption, Pretty Good Privacy, public key cryptography, and PEM, and cite some methods of code-breaking, or cryptanalysis, like the mathematical attack, the brute force attack, and power analysis.
Cryptography
If you want something to stay a secret, don't tell anyone and don't write it down. If you do have to send it to someone else, hide it in another message so that only the right person will understand. Many creative methods of hiding messages have been invented over the centuries. Cryptography can be defined as the art and science of secret codes. It is a collection of techniques that transform data in ways that are difficult to mimic or reverse for someone who does not know the secret. These techniques involve marking, transforming, and reformatting messages to protect them from disclosure, change, or both. Cryptography in the computer age basically involves the translation of the original message into a new and unintelligible one by a mathematical algorithm using a specific "key". People mean different things when they talk about cryptography. Children play with toy ciphers and secret languages. However, these have little to do with real security and strong encryption. Strong encryption is the kind of encryption that can be used to protect information of real value against organized criminals, multinational corporations, and major governments. Strong encryption used to be solely military business; in the information society, however, it has become one of the central tools for maintaining privacy and confidentiality.
Why do we need cryptography?
The art of long distance communication was mastered by civilizations many centuries ago, and the transmission of secret political or confidential information has been a problem ever since. To solve this problem to some extent, secret codes were developed by groups of people who had to carry out such secretive communications. These codes were designed to transform words into code words using basic guidelines known only to their members. Messages could now be sent or received with a reduced danger of interception or forgery, as a code breaker would have to struggle hard to break the code.
As time progressed and radio, microwave, and internet communication developed, more complex and safer codes started to evolve. The traditional use of cryptography was to make messages unreadable to the enemy during wartime. However, the introduction of the computing age changed this perspective dramatically. Through the use of computers, a whole new use for information hiding evolved. Around the early 1970s the private sector began to feel the need for cryptographic methods to protect its data. This could include 'sensitive information' (corporate secrets), password files, or personal records.
Some day-to-day examples
Encryption technology is used nowadays in almost every digital communication system. The most common example is satellite or cable TV. All the signals are available in the air, but the programs can be viewed only by those subscribers who have made the payment. This is done by a simple password security system: the subscriber gets an authenticated password on payment and can use it only for the period he has paid for, after which it lapses. Another common application of encryption is the ATM card, where again a transaction is completed only on the acceptance of a secure and authenticated password. Mobile phones, and for that matter even internet connections, are likewise based on small-scale cryptographic techniques.
Crypto algorithm
The crypto algorithm specifies the mathematical transformation that is performed on data to encrypt or decrypt it. A crypto algorithm is a procedure that takes the plaintext data and transforms it into ciphertext in a reversible way. A good algorithm produces ciphertext that yields very few clues about either the key or the plaintext that produced it. Some algorithms are stream ciphers, which encrypt a digital data stream bit by bit. The best-known algorithms are block ciphers, which transform data in fixed-size blocks, one block at a time.
• Stream ciphers
A stream cipher algorithm is designed to accept a crypto key and a stream of plaintext and produce a stream of ciphertext.
• Block cipher
Block ciphers are designed to take data blocks of a specific size, combine them with a key of a particular size, and yield a block of ciphertext of a certain size. Block ciphers are analyzed and tested for their ability to encrypt data blocks of their given block size. A reasonable cipher should generate ciphertext that has as few noticeable properties as possible. A statistical analysis of ciphertext generated by a block cipher algorithm should find that individual data bits, as well as patterns of bits, appear completely random. Non-random patterns are the first thing a code breaker looks for, as they usually provide the entering wedge needed to crack a code.
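To make the distinction concrete, here is a minimal Python sketch of the two styles. Both "ciphers" are toys built from a hash function purely for intuition (the names and construction are mine, not a standard design); neither should ever be used for real security.

import hashlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    # Stream style: derive a keystream of arbitrary length, XOR it with the data.
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(d ^ k for d, k in zip(data, keystream))

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Block style: transform exactly one fixed-size (16-byte) block at a time.
    assert len(block) == 16
    pad = hashlib.sha256(key).digest()[:16]
    return bytes(b ^ p for b, p in zip(block, pad))

message = b"attack at dawn!!"
ct = toy_stream_cipher(b"secret", message)
assert toy_stream_cipher(b"secret", ct) == message   # XOR keystream decrypts itself

Because XOR with the same keystream undoes itself, encryption and decryption are the same operation in the stream case; the block case, applied naively block by block, is exactly the ECB mode discussed next.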
Cipher modes
The term cipher mode refers to a set of techniques used to apply a block cipher to a data stream. Several modes have been developed to disguise repeated plaintext blocks and improve the security of the block cipher. Each mode defines a method of combining the plaintext, crypto key, and encrypted ciphertext in a special way to generate the stream of ciphertext actually transmitted to the recipient. In theory there could be countless different ways of combining and feeding back the inputs and outputs of a cipher. In practice, four basic modes are used.
• Electronic Code Book (ECB)
It is the simplest of all the modes: the cipher is simply applied to the plaintext block by block. It is the most efficient mode; it can be sped up by using parallel hardware and, unlike the other modes, does not require an extra data word for seeding a feedback loop. However, a block of padding may be needed to guarantee that full blocks are provided for encryption and decryption. ECB has a security problem in that repeated plaintext blocks yield repeated ciphertext blocks.
• Cipher Block Chaining (CBC)
This mode hides patterns in the plaintext by systematically combining each plaintext block with a ciphertext block before actually encrypting it; the two blocks are combined bit by bit using the exclusive-or (XOR) operation (a code sketch contrasting ECB and CBC follows this list). In order to guarantee that there is always some random-looking ciphertext to apply to the first plaintext block, the process is started with a block of random bits called the initialization vector. Two messages will never yield the same ciphertext, even if the plaintexts are identical, as long as the initialization vectors are different. In most applications the initialization vector is added at the beginning of the message in plain text. A shortcoming of CBC is that encrypted messages may be as many as two blocks longer than the same message in ECB mode. One of the blocks is added to transmit the initialization vector to the recipient, since proper decryption depends on the initialization vector to start the feedback process. The other block is added as padding so that a full block is always encrypted or decrypted.
• CFB - Cipher Feedback mode
CFB is similar to CBC in that it feeds the ciphertext block back to the block cipher. However, it is different in that the block cipher doesn't directly encrypt the plaintext. Instead it is used to generate a constantly varying key that encrypts the plaintext with a Vernam cipher: blocks of plaintext are XORed with successive blocks of keying data generated by the block cipher to produce the ciphertext. This mode is also called ciphertext autokey (CTAK). The advantage of this method is that it is not limited to the cipher's block size; the mode can be adapted to work with smaller units, down to individual bits. Like CBC, however, it needs an initialization vector to be sent for decryption.
• OFB - Output Feedback
It is similar to CFB but simpler: it uses the block cipher all by itself to generate the Vernam keystream. The keystream doesn't depend on the data stream at all; the block cipher has nothing to do with processing the message and is only used to generate keys. This mode is also called autokey mode. The advantage is that, as with CFB, the length of the plaintext does not have to fit block boundaries. Also, since the keystream depends only on the key and the initialization vector and not on the data stream, the decryption keystream can be prepared ahead of time and kept at the receiver's end.
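The ECB weakness and the CBC remedy are easy to demonstrate. The sketch below is a minimal illustration assuming the third-party pyca "cryptography" package (pip install cryptography), with AES standing in for any block cipher:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
iv = os.urandom(16)                     # random initialization vector for CBC
plaintext = b"SIXTEEN BYTE BLK" * 2     # two identical 16-byte blocks

ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ecb_ct = ecb.update(plaintext) + ecb.finalize()
cbc_ct = cbc.update(plaintext) + cbc.finalize()

# ECB: identical plaintext blocks produce identical ciphertext blocks.
print(ecb_ct[:16] == ecb_ct[16:32])     # True -- the repetition leaks
# CBC: chaining through XOR hides the repetition.
print(cbc_ct[:16] == cbc_ct[16:32])     # False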
Crypto Algorithms
1. DES
This is a widely used algorithm. It was developed by IBM (from its earlier Lucifer cipher) and was adopted as an official Federal Information Processing Standard (FIPS PUB 46) in 1976. The algorithm uses a 64-bit key (8 parity bits + 56 key bits), converting 64-bit blocks of plaintext into 64-bit blocks of ciphertext (the block cipher method). This is done by putting the original text through a series of permutations and substitutions, whose results are then merged with the original plaintext using an XOR operation. This encryption sequence is repeated 16 times, using a different arrangement of the key bits each time.
2. One time pads
A one-time pad is a very simple yet completely unbreakable symmetric cipher; that is, it uses the same key for encryption as for decryption. As with all symmetric ciphers, the sender must transmit the key to the recipient via some secure channel, otherwise the recipient won't be able to decrypt the ciphertext. The key for a one-time pad cipher is a string of random bits; for the cipher's security guarantee to hold they should be truly random, though in practice pads are sometimes generated by a cryptographically strong pseudo-random number generator (CSPRNG). With a one-time pad, there are as many bits in the key as in the plaintext. This is the primary drawback of a one-time pad, but it is also the source of its perfect security. It is essential that no portion of the key is ever used for another encryption (hence the name "one-time pad"), otherwise cryptanalysis can break the cipher. The algorithm is very simple, for example an XOR operation between the plaintext and the key; the same XOR operation also gives back the plaintext.
ciphertext = plaintext ⊕ key
plaintext = ciphertext ⊕ key
However, the security of the one-time pad is dependent upon the randomness of the generated key. The code is safe even from a brute force attack that runs the ciphertext through all possible keys, because this would generate every plausible plaintext message of that length with equal likelihood, giving the attacker no way to pick the right one.
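A one-time pad fits in a few lines of Python. This sketch uses os.urandom for the pad, which is an operating-system CSPRNG; strictly speaking, perfect secrecy requires truly random bits:

import os

def otp(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the corresponding key byte.
    assert len(key) == len(data), "the pad must be exactly as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
pad = os.urandom(len(message))          # use once, then destroy
ciphertext = otp(message, pad)          # ciphertext = plaintext XOR key
assert otp(ciphertext, pad) == message  # plaintext = ciphertext XOR key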
3. Triple DES
Triple encryption makes DES-encoded text considerably more secure; it is equivalent to having a 112-bit key length. However, triple DES is significantly slower than commercial alternatives with similar key lengths.
4. Rivest Cipher #4
RC4 is a symmetric stream cipher developed by Ron Rivest. Its key size can be varied according to the level of security required; generally it is used with a 128-bit key. The algorithm is fairly immune to differential cryptanalysis, but when used with short keys it is vulnerable to brute force cracking.
5. Idea
IDEA is an algorithm which appeared in 1990. It was developed at the Swiss Federal Institute of Technology. Its security is based not on hiding the algorithm but on keeping a secret key. Its key is 128 bits long, which makes it more attractive than DES, and it can be used with the usual block cipher modes. The algorithm is publicly available and easy to implement. It is suitable for e-commerce, and it can be exported and used worldwide. To date, no published cryptanalytic technique has worked against IDEA, and a brute force attack on its 128-bit key space, even trying a billion keys per second for over a billion years, would still not exhaust the keys.
6. Skip Jack
Skipjack is a block encryption algorithm developed by the NSA (National Security Agency, USA). It encrypts 64-bit blocks using an 80-bit key, and the usual cipher modes can be used to apply it to streams of data. It is provided in prepackaged encryption chipsets and in the Fortezza crypto card, a PC card containing a crypto processor and storage for keying material. The disadvantage of Skipjack is that very little about it is publicly known (reportedly to keep the NSA's design techniques secret). It is fairly resistant to differential cryptanalysis and other shortcut attacks. Skipjack is being promoted to protect military communications in the Defense Messaging System (DMS), which reflects a measure of confidence that it is secure.
7. RSA public key algorithm
The best known and most popular embodiment of the public key idea is RSA, named after its inventors Ronald Rivest, Adi Shamir, and Leonard Adleman. The high level of security the RSA algorithm offers derives from the difficulty of decomposing large integers into prime factors: two primes which, when multiplied by one another, give the original number. Prime factoring of very large numbers is an important field in number theory. One of the drawbacks of the RSA algorithm compared with symmetric methods is that encrypting and decrypting messages takes much more computing power. The fastest RSA chip in existence can only manage a throughput of 600 kbits per second when using 512-bit primes; comparable DES hardware implementations are anything from 1,000 to 10,000 times faster. At present, DES software implementations can encrypt around 100 times faster than the RSA algorithm. Cryptanalysis can be attempted by factoring the key into its two primes; estimates for factoring a 512-bit key show that a computer system running at a million operations a second (1 MIPS) and using current algorithms would take 420,000 years to find the prime factors involved.
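Textbook RSA is short enough to show with toy numbers. The sketch below uses deliberately tiny primes to make the trapdoor visible; real systems use 2048-bit moduli and padding schemes such as OAEP, and Python 3.8+ is assumed for the modular inverse:

p, q = 61, 53               # the two secret primes
n = p * q                   # public modulus; easy to publish, hard to factor when large
phi = (p - 1) * (q - 1)
e = 17                      # public exponent, chosen coprime to phi
d = pow(e, -1, phi)         # private exponent; computing it requires the factorization

m = 42                      # the message, encoded as a number smaller than n
c = pow(m, e, n)            # encrypt with the public key (e, n)
assert pow(c, d, n) == m    # decrypt with the private key (d, n)

Everything an attacker sees is (e, n, c); recovering d from them is as hard as factoring n, which is exactly the problem described above.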
8. AES
AES is the algorithm that has replaced DES as the NIST standard. The Advanced Encryption Standard (AES) provides a better combination of safety and speed than DES. Using 128-bit secret keys, AES offers higher security against brute-force attack than the old 56-bit DES keys, and AES can use larger 192-bit and 256-bit keys if necessary. AES is a block cipher and encrypts data in fixed-size blocks, but each AES block is 128 bits, twice the size of a DES block. While DES was designed for hardware, AES runs efficiently in a broad range of environments, from programmable gate arrays, to smart cards, to desktop computer software and browsers. In 2000, NIST selected Rijndael, an encryption algorithm developed by two Belgian cryptographers, as the new AES. A few products already use the Rijndael algorithm, notably the open-source NetBSD operating system, and Rijndael has also appeared as an option in several desktop file-encryption programs. AES has since been published as a FIPS (Federal Information Processing Standard).
Internet cryptography techniques (Applications of the crypto algorithms)
• Point-to-point link encryption
• IP link encryption
• A virtual private network (VPN) constructed with IP security protocol routers
• A VPN constructed with IPSEC firewalls
• Public key algorithms with Pretty Good Privacy (PGP)
• E-mail with Privacy Enhanced Mail (PEM)
• Watermarking
• Point-to-point link encryption
This produces a fully isolated connection between a pair of computers by applying crypto to the data link. It yields the highest security by being the most restrictive in physical and electronic access. It is not necessarily an internet solution since it doesn’t need to use TCP/IP software. It is the simplest design, but the most expensive to implement and extend.
• IP link encryption
This produces a highly secure extensible TCP/IP network by applying crypto to the data link and by restricting physical access to hosts on the network. This architecture blocks communication with untrusted hosts and sites. Sites use point to point interconnections and apply encryption to all traffic on those interconnections.
• VPN construction with IP security
This is a virtual private network that uses the internet to carry traffic between trusted sites. Crypto is applied at the internet layer using IPSEC. This approach uses encrypting routers and doesn't provide the sites with access to untrusted internet sites.
• VPN construction with IPSEC firewalls
This is a different approach to the VPN that uses encrypting firewalls instead of encrypting routers. Crypto is still applied at the internet layer using IPSEC (the IP security protocol). The firewalls encrypt all traffic between trusted sites and also provide controlled access to untrusted hosts. Strong firewall access control is necessary to reduce the risk of attacks on the crypto mechanisms as well as attacks on hosts within the trusted sites.
Digital signature
Digital signatures can be used to check the authenticity of the author of a message using the above-mentioned techniques. In 1991 the National Institute of Standards and Technology (NIST) decided on a standard for digital signatures, DSS (Digital Signature Standard). DSS proposes an algorithm for digital signatures (DSA, the Digital Signature Algorithm), which is based not on RSA but on a public key implementation of the "discrete logarithm problem" (what value must the exponent x assume to satisfy y = g^x mod p, where p is a prime?). While the problem underlying this method is just as hard to solve as RSA's prime factor decomposition, many people have claimed that DSA's security is not perfect, and after massive criticism its key length was finally increased from 512 to 1024 bits. DSS is expected to become an official standard for US government bureaus in the not too distant future.
• PEM
PEM is the standard for encrypting messages on the internet's mail service. It uses both the RSA public key method and the symmetric DES method. To send a file in encrypted form, it is first encrypted with a randomly generated DES key. The DES key itself is then encoded with the recipient's public key in the RSA system and sent along with the DES-encoded file. The advantage of this is that only a small part of the message, the DES key, has to be encoded using the time-consuming RSA algorithm; the contents of the message itself are encrypted much faster using DES alone.
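The same hybrid pattern is easy to sketch with modern stand-ins: AES-GCM in place of PEM's original DES, and RSA-OAEP to wrap the session key. This assumes the third-party pyca "cryptography" package and illustrates the pattern, not PEM's actual wire format:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

session_key = AESGCM.generate_key(bit_length=128)   # fresh random key per message
nonce = os.urandom(12)
body = AESGCM(session_key).encrypt(nonce, b"the actual message", None)

# Only the short session key pays the cost of slow public-key encryption.
wrapped_key = recipient_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# Transmit (wrapped_key, nonce, body); the recipient unwraps the session
# key with the private key and then decrypts the body symmetrically.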
• Message Digests
There is one more important encryption technique worth mentioning, and that is the one-way function: a basically non-reversible quick encryption, where encrypting is easy but decrypting is not. While encryption could take a few seconds, reversing it could take hundreds, thousands, or millions of years even for the most powerful computers. One-way functions are used basically to test the integrity of a document or file by generating a digital fingerprint with special hash functions. Assume that you have a document to send to someone, or to store for the future, and you need a way to prove at some later time that the document has not been altered. You run a one-way function, which produces a fixed-length value called a hash (also called a message digest). The hash is a unique signature of the document that you can keep and send with the document. The recipient can run the same one-way function to produce a hash that should match the one you sent with the document. If the hashes don't match, the document has been altered or corrupted.
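Python's standard hashlib module makes the fingerprint idea concrete; the contract text below is a made-up example:

import hashlib

document = b"Contract: Alice pays Bob 100 euros."
digest = hashlib.sha256(document).hexdigest()   # the fingerprint you keep or send

tampered = b"Contract: Alice pays Bob 900 euros."
# Any change, however small, yields a completely different hash.
assert hashlib.sha256(tampered).hexdigest() != digest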
• Water marking
A watermark is data imperceptibly added to the cover signal in order to convey hidden information. It is used to protect the copyright of an author on the internet. A watermark is a hidden file, consisting of either a picture or data, that gets copied along with the document whenever it is downloaded from the web; because of this, unauthorized copies of the article can be identified and traced.
Latest crypto techniques
Policy that regulates technology tends to be made obsolete by technological innovation. Trying to regulate confidentiality by regulating encryption closes one door and leaves two open: steganography and winnowing.
• Steganography
An encrypted message looks like garbage and alerts people that there is something to hide. But what if the message is totally innocuous looking? This is an old trick that started centuries ago with writing in ink that is invisible until the paper has been heated. The microdot, a piece of film containing a very highly reduced image of the secret message and embedded in the punctuation marks of a normal document, was invented during World War II. For a modern example, if you used the least significant bit of each pixel in a bitmap image to encode a message, the impact on the appearance of the image would not be noticeable. This is known as steganography, or covered writing. A 480-pixel-wide by 100-pixel-high image, smaller than many WWW home page banners, could theoretically contain a message of more than 5,000 characters. The encoding is quite easy with a computer, and requires no complicated mathematics at all. And of course the same principles apply to audio and video files as well. The image can be used simply as a carrier, with the message being first encrypted.
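Here is a minimal least-significant-bit sketch in Python, with a bytearray standing in for raw bitmap pixel data (reading and writing a real image file is omitted):

def hide(pixels: bytearray, message: bytes) -> bytearray:
    # One message bit goes into the lowest bit of each pixel byte.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for the message"
    for i, bit in enumerate(bits):
        pixels[i] = (pixels[i] & 0xFE) | bit   # overwrite only the lowest bit
    return pixels

def reveal(pixels: bytes, length: int) -> bytes:
    # Collect the low bits back into bytes, least significant bit first.
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length))

pixels = bytearray(range(256)) * 20    # stand-in for 5,120 pixel bytes
secret = b"meet at dawn"
assert reveal(hide(pixels, secret), len(secret)) == secret

Changing only the lowest bit shifts each pixel value by at most one step, which is why the alteration is invisible to the eye.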
• Winnowing and Chaffing
Just as the name suggests, this technique adds chaff (garbage data) to the wheat (the message) before sending it, and then winnows, or removes, the chaff from the wheat at the receiver. Since winnowing does not use encryption, it is not affected by the regulations on crypto products. The message is first broken into packets, and each packet is MACed using a MAC algorithm such as HMAC-SHA1; this is very similar to running the packet through a keyed hash function. Then chaff is added (chaffing) to the packets of MACed data before they are sent. At the receiving end, only those packets are accepted that produce the correct MAC (showing that no changes have been made), and then the chaff is removed; this is called winnowing.
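Python's standard hmac module is enough to sketch the idea. The packet contents and the shared key are invented for the example:

import hmac, hashlib, os

key = b"shared secret between sender and receiver"

def mac(seq: int, payload: bytes) -> bytes:
    # HMAC-SHA1 over the sequence number and payload, as in the scheme above.
    return hmac.new(key, seq.to_bytes(4, "big") + payload, hashlib.sha1).digest()

wheat = [(i, p, mac(i, p)) for i, p in enumerate([b"pay", b"Bob", b"100"])]
chaff = [(i, p, os.urandom(20)) for i, p in enumerate([b"pay", b"Eve", b"999"])]

# The receiver sees an interleaved mix and keeps only packets whose MAC verifies.
received = sorted(wheat + chaff, key=lambda t: t[0])
winnowed = [p for seq, p, tag in received if hmac.compare_digest(tag, mac(seq, p))]
assert winnowed == [b"pay", b"Bob", b"100"]

Note that nothing was encrypted: an eavesdropper without the MAC key simply cannot tell wheat from chaff.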
Cryptanalysis
There are many kinds of cryptanalytic techniques:
1) Differential cryptanalysis.
2) Linear cryptanalysis.
3) Brute force cracking
4) Power analysis
5) Timing analysis, etc.
Cryptographers have traditionally analyzed the security of ciphers by modeling crypto algorithms as ideal mathematical objects. A modern cipher is conventionally a black box that accepts plaintext inputs and provides ciphertext outputs. Inside this box, the algorithm maps the inputs to the outputs using a predefined function that depends on the value of a secret key. The black box is described mathematically, and formal analysis is used to examine the system's security. In a modern cipher, the algorithm's security rests solely on the concealment of the secret key. Attack strategies therefore often reduce to methods that can expose the value of the secret key. Unfortunately, hardware implementations of an algorithm can leak information about the secret key, which adversaries can use.
Mathematical attacks
Techniques such as differential and linear cryptanalysis, introduced in the early 1990s, are representative of traditional mathematical attacks. Differential and linear cryptanalysis work by exploiting statistical properties of crypto algorithms to uncover potential weaknesses. These attacks are not dependent on a particular implementation of the algorithm but on the algorithm itself, so they can be broadly applied. Traditional attacks, however, require the acquisition and manipulation of large amounts of data. Attacks that exploit weaknesses in a particular implementation are an attractive alternative and are often more likely to succeed in practice.
Implementation attacks
The realities of a physical implementation can be extremely difficult to control, and they often result in unintended leakage of side channel information such as power dissipation, timing information, or faulty outputs. The leaked information is often correlated with the secret key, so enemies monitoring it may be able to learn the secret key and breach the security of the crypto system. Algorithms such as DES and RSA, now implemented in smart cards, are also under considerable threat. Smart cards are often used to store crypto keys and execute crypto algorithms, and data on the card is itself stored using cryptographic techniques.
Power consumption is one potential side channel: since power is generally supplied by an external source, it can be directly observed. All calculations performed by the smart card operate on logical 0s and 1s, and current technological constraints result in different power consumption when manipulating a logic one versus a logic zero. Based on a spectral analysis of the power curve, or on the timing between the ones and the zeros, the secret key can be cracked by adversaries.
Countermeasures
Many countermeasures are being worked out to prevent implementation attacks such as power analysis or timing analysis. These attacks are normally based on the assumption that the operations being attacked occur at fixed intervals of time; if the operations are randomly shifted in time, then statistical analysis of side channel information becomes very difficult. The other side of the coin is that hardware implementations must be carefully designed so that they do not leak any side channel information. Hardware countermeasures are often difficult to design, analyse, and test, so software methods of introducing delay or masking data are the only easy ways to overcome this problem.
Conclusion
The internet has brought with it an unparalleled rate of new technology adoption. Commercial establishments, industry, and the armed forces will need an assortment of cryptographic products and other mechanisms to provide privacy, authentication, message integrity, and trust in order to achieve their missions. These mechanisms demand procedures, policies, and law. However, cryptography is not an end unto itself but the enabler of safe business and communication. Good cryptography and good policies are therefore as essential for the future of internet-based communications as the applications that utilize them.

Tuesday, March 09, 2010

WEB TECHNOLOGY IN LAMP TECHNOLOGY






LAMP is a shorthand term for a web application platform consisting of Linux, Apache, MySQL, and one of Perl or PHP. Together, these open source tools provide a world-class platform for deploying web applications. Running on the Linux operating system, the Apache web server, the MySQL database, and the PHP or Perl programming languages deliver all of the components needed to build secure, scalable, dynamic websites. LAMP has been touted as "the killer app" of the open source world.

With many LAMP sites running e-business logic and e-commerce and requiring 24x7 uptime, ensuring the highest levels of data and application availability is critical. For organizations that have taken advantage of LAMP, these levels of availability are ensured by providing constant monitoring of the end-to-end application stack and immediate recovery of any failed solution components. Some solutions also support the movement of LAMP components among servers, removing the need for downtime associated with planned system maintenance.

The paper gives an overview of Linux, Apache, and MySQL, focuses mainly on PHP and its advantages over other active-generation tools for interactive web design and its interface with an advanced database like MySQL, and finally presents a conclusion.


CONTENTS


• Introduction
• Linux
• Apache
• MySQL
• Features included in MySQL
• PHP
• Technologies on the client side
• Technologies on the server side
• The benefits of using PHP server side processing
• Browser and its issues
• Applying LAMP
• When not to use LAMP?
• Advantages of LAMP
• Conclusion

INTRODUCTION:
One of the great "secrets" of almost all websites (aside from those that publish static .html pages) is that behind the scenes, the web server is actually just one part of a two or three tiered application server system. In the open source world, this explains the tremendous popularity of the Linux-Apache-MySQL-PHP (LAMP) environment. LAMP provides developers with a traditional two-tiered application development platform: there is a database, and a "smart" web server able to communicate with the database. Clients only talk to the web server, while the web server transparently talks to the database when required. The following diagram illustrates how a typical LAMP server works.
Fig. Example architecture of LAMP
By combining these tools you can rapidly develop and deliver applications. Each of these tools is the best in its class, and a wealth of information is available for the beginner. Because LAMP is easy to get started with, yet capable of delivering enterprise-scale applications, the LAMP software model just might be the way to go for your next, or your first, application. Let's take a look at the parts of the system.

LINUX:

Linux is presently the most commonly used implementation of UNIX. Built from the ground up as a UNIX work-alike operating system for the Intel 386/486/Pentium family of chips by a volunteer team of coders on the internet, Linux has revitalized the old-school UNIX community and added many new converts. Linux development is led by Linus Torvalds. The core of the system is the Linux kernel; on top of the kernel, a Linux distribution will usually utilize many tools from the Free Software Foundation's GNU project. Linux has gained a huge amount of momentum and support, both from individuals and from large corporations such as IBM. Linux provides a standards-compliant, robust operating system that inherits the UNIX legacy of security and stability. Originally developed for Intel x86 systems, Linux has been ported to everything from small embedded systems at one end of the spectrum up to large mainframes and clusters at the other. Linux can run on most common hardware platforms.

APACHE:

Apache is the most popular web server on the Internet. Apache, like Linux, MySQL, and PHP, is an open source project. Apache is based on the NCSA (National Center for Supercomputing Applications) web server. In 1995-1996 a group of developers coalesced around a collection of patches to the original NCSA web server; this group evolved into the Apache Software Foundation. With the release of Apache 2.0, Apache has become a robust, well-documented, multi-threaded web server. Particularly appealing in the 2.0 release is improved support for non-UNIX systems. Apache can run on a large number of hardware and software platforms. Since 1996 Apache has been the most popular web server on the Internet; presently Apache holds 67% of the market.

MySQL:

MySQL is a fast, flexible relational database. MySQL is the most widely used relational database management system in the world, with over 4 million instances in use. MySQL is high-performance, robust, multi-threaded, and multi-user, and utilizes a client-server architecture. Today, more than 4 million web sites create, use, and deploy MySQL-based applications. MySQL's focus is on stability and speed; all aspects of the SQL standard that do not conflict with those performance goals are supported.

Features include:

• Portability: support for a wide variety of operating systems and hardware
• Speed and reliability
• Ease of use
• Multi-user support
• Scalability
• Standards compliant
• Replication
• Low TCO (total cost of ownership)
• Quality documentation
• Dual license (free and non-free)
• Full text searching
• Support for transactions
• Wide application support


PHP:


What's next in the field of web design? It's already here. Today's webmasters are deluged with technologies available to incorporate into their designs; learning everything is fast becoming an impossibility. So your choice of design technologies becomes increasingly important if you don't want to be the last man standing, left behind when everyone else has moved on. But before we get to that, let's do a quick review of the previous generation of web design.
In the static generation of web design, pages were mostly HTML pages that relied solely on static text and images to relay their information over the internet. Web pages lacked x and y coordinate positioning and relied on hand-coded tables for somewhat accurate placement of images and text. Simple, and straight to the point: web design was more like writing a book and publishing it online.
The second generation of web design (the one we are in now) would be considered the ACTIVE generation. For quite a while now the internet has been drifting towards interactive web designs which allow users a more personal and dynamic experience when visiting websites. No longer is a great website simply a bunch of static text and images; a great website is now one which allows, indeed encourages, user interaction. No longer does knowing HTML inside out make you a webmaster, although that does help a great deal! Now, knowing how to use interactive technologies isn't just helpful, it's almost a requirement. Here are a few of the interactive technologies available for the webmasters of today.

Technologies on the client side:
1. ActiveX Controls: Developed by Microsoft, these are only fully functional on the Internet Explorer web browser. This eliminates them from being cross-platform, and thus eliminates them from being a webmaster's number-one technology choice for the future. Disabling ActiveX controls in the IE web browser is something many people do for security, as the platform has been used by many for unethical and harmful things.

2. Java Applets: Java Applets are programs that are written in the Java Language. They are self contained and are supported on cross platform web browsers. While not all browsers work with Java Applets, many do. These can be included in web pages in almost the same way images can.

3. DHTML and Client-Side Scripting: DHTML, JavaScript, and VBScript all have in common the fact that the code is transmitted with the original webpage, and the web browser translates the code and creates pages that are much more dynamic than static HTML pages. VBScript is only supported by Internet Explorer, which again makes it a bad choice for the web designer wanting to create cross-platform web pages. With Linux and other operating systems gaining in popularity, it makes little sense to lock yourself into one platform.
Of all the client-side options available, JavaScript has proved to be the most popular and most widely used, and it is a natural next step once you're an expert with HTML.

Technologies on the server side:
1. CGI: This stands for Common Gateway Interface. It wasn't all that long ago that the only dynamic solution for webmasters was CGI. Almost every web server in use today supports CGI in one form or another. The most widely used CGI language is Perl; Python, C, and C++ can also be used as CGI programming languages, but are not nearly as popular as Perl. The biggest disadvantage of CGI on the server side is its lack of scalability. Although mod_perl for Apache and FastCGI attempt to help improve performance in this department, CGI is probably not the future of web design because of this very problem.
2. ASP: Another of Microsoft's attempts to "improve" things. ASP is a proprietary scripting language. Performance is best on Microsoft's own servers, of course, and the lack of widespread COM support has reduced the number of webmasters willing to bet the farm on another one of Microsoft's silver bullets.

3. Java Server Pages and Java Servlets: Server-side JavaScript is Netscape's answer to Microsoft's ASP technology. Since this technology is supported almost exclusively on the Netscape Enterprise Server, it is highly unlikely that it will ever become a serious contender in the battle for the webmaster's attention.

4. PHP: PHP is the most popular scripting language for developing dynamic web-based applications. Originally developed by Rasmus Lerdorf as a way of gathering web form data without using CGI, it has quickly grown and gathered a large collection of modules and features. The beauty of PHP is that it is easy to get started with, yet it is capable of extremely robust and complicated applications. As an embedded scripting language, PHP code is simply inserted into an HTML document, and when the page is delivered the PHP code is parsed and replaced with the output of the embedded PHP commands. PHP is easier to learn and generally faster than Perl-based CGI. However, quite unlike ASP, PHP is totally platform independent and there are versions for most operating systems and servers.

The benefits of using PHP server side processing include the following:
• Reduces network traffic.
• Avoids cross-platform issues with operating systems and web browsers.
• Can send data to the client that isn't on the client computer.
• Quicker loading time: after the server interprets all the PHP code, the resulting page is transmitted as plain HTML.
• Security is increased, since logic can be coded into PHP that will never be viewed from the browser.


BROWSER:

Since all the tools are in place to deliver HTML content to a browser, it is assumed that control of the application will be through a browser-based interface. Using the browser and HTML as the GUI (graphical user interface) for your application is frequently the most logical choice. The browser is familiar and available on most computers and operating systems. Rendering of HTML is fairly standard, although frustrating examples of incompatibilities remain. Using HTML and HTML form elements displayed within a browser is easier than building a similarly configured user interface from the ground up. If your application is internal, you may want to develop for a specific browser and OS combination; this saves you the problems of browser inconsistencies and allows you to take advantage of OS-specific tools.

APPLYING LAMP:

1. Storing our data: Our data is going to be stored in the MySQL Database. One instance of MySQL can contain many databases. Since our data will be stored in MySQL it will be searchable, extendable, and accessible from many different machines or from the whole World Wide Web.
2. User Interface: Although MySQL provides a command line client that we can use to enter our data we are going to build a friendlier interface. This will be a browser-based interface and we will use PHP as the glue between the browser and the Database.
3. Programming: PHP is the glue that takes the input from the browser and adds the data to the MySQL database. For each action (add, edit, or delete) you would build a PHP script that takes the data from the HTML form, converts it into a SQL query, and updates the database.

4. Security: The standard method is to use the security and authentication features of the Apache web server. The tool mod_auth allows for password-based authentication. You can also use allow/deny directives to limit access based on location. Using one or both of these Apache tools, you can limit access based on who users are or where they are connecting from. Other security features that you may want to use include mod_auth_ldap, mod_auth_oracle, and the certificate-based authentication provided by mod_ssl.


When not to use LAMP?

Applications not well suited for LAMP include those that have a frequent need to exchange large amounts of transient data or that have particular and demanding needs for state maintenance. It should be remembered that at its core HTTP is a stateless protocol, and although cookies allow for some session maintenance, they may not be satisfactory for all applications. If you find yourself fighting the HTTP protocol at every turn and avoiding the "URL as a resource mapped to the file system" arrangement of web applications, then perhaps LAMP is not the best choice for that particular application.

ADVANTAGES OF LAMP:

• Seamless integration with Linux, Apache, and MySQL to ensure the highest levels of availability for websites running on LAMP.
• Full 32-bit and 64-bit support for Xeon, Itanium, and Opteron-based systems; runs on enterprise Linux distributions from Red Hat and SuSE.
• Supports active/active and active/standby LAMP configurations of up to 32 nodes.
• Data can reside on shared SCSI, Fibre Channel, or Network Attached Storage devices, or on replicated volumes.
• Maximizes e-commerce revenues and minimizes e-business disruption caused by IT outages.
• Automated availability monitoring, failover recovery, and failback of all LAMP application and IT-infrastructure resources.
• Intuitive Java-based web interface provides at-a-glance LAMP status and simple administration.
• Easily adapted to sites running Oracle, DB2, and PostgreSQL.
• Solutions also exist for other Linux application environments, including Rational ClearCase, Sendmail, Lotus Domino, and mySAP.

CONCLUSION:
While Flash, ActiveX, and other proprietary elements will continue to creep in and entice webmasters, in the end compatibility issues and the price of development will dictate what eventually wins out in the next generation of web design. For the foreseeable future, PHP, HTML, and databases are going to be at the heart of interactive web design, and that's where I'm placing my bets. Open Source continues to play an important role in driving web technologies. Even though Microsoft would like to be the only player on the field, Open Source, with its flexibility, will almost certainly be the winner in the end. Betting the farm on LAMP (Linux, Apache, MySQL, PHP) seems much wiser to me than the alternative (Microsoft, IIS, ASP) ... not to mention it's a much less expensive route to follow.

A NOVEL TECHNIQUE TO ENHANCE THE SECURITY IN SYMMETRIC KEY CRYPTOGRAPHY

ABSTRACT
Cryptography is the science of keeping private information private and safe. In today's high-tech information economy the need for privacy is greater than ever. In this paper we describe a common model for the enhancement of all symmetric key algorithms, like AES, DES, and the TCE algorithm. The proposed method combines the symmetric key and a sloppy key, from which a new key is extracted. The sloppy key is changed after a short range of packets transmitted in the network.

INTRODUCTION

Code books and cipher wheels have given way to microprocessors and hard drives, but the goal is still the same: take a message and obscure its meaning so only the intended recipient can read it. In today's market, key sizes are increased to keep up with the ever-growing capabilities of code breakers. Classical cryptanalysis involves an interesting combination of analytical reasoning, application of mathematical tools, pattern finding, patience, determination, and luck. A standard cryptanalytic attack is to know some plaintext matching a given piece of ciphertext and try to determine the key that maps one to the other. This plaintext can be known because it is standard or because it is guessed. If text is guessed to be in a message, its position is probably not known, but a message is usually short enough that the cryptanalyst can assume the known plaintext is in each possible position and run the attacks for each case in parallel. In this case, the known plaintext can be something so common that it is almost guaranteed to be in a message. A strong encryption algorithm will be unbreakable not only under known plaintext (assuming the enemy knows all the plaintext for a given ciphertext) but also under "adaptive chosen plaintext", an attack making life much easier for the cryptanalyst. In this attack, the enemy gets to choose what plaintext to use and gets to do this over and over, choosing the plaintext for round N+1 only after analyzing the result of round N. For example, as far as we know, DES is reasonably strong even under an adaptive chosen plaintext attack. Of course, we do not have access to the secrets of government cryptanalytic services. Still, the working assumption is that DES is reasonably strong under known plaintext and triple-DES is very strong under all attacks.
To summarize, the basic types of cryptanalytic attack, in order of difficulty for the attacker (hardest first), are:
• Ciphertext only: the attacker has only the encoded message from which to determine the plaintext, with no knowledge whatsoever of the latter. A ciphertext-only attack is usually presumed to be possible, and a code's resistance to it is considered the basis of its cryptographic security.
• Known plaintext: the attacker has the plaintext and corresponding ciphertext of an arbitrary message not of his choosing. The particular message of the sender's is said to be 'compromised'. In some systems, one known ciphertext-plaintext pair will compromise the overall system, both prior and subsequent transmissions, and resistance to this is characteristic of a secure code.
Under the following attacks, the attacker has the far less likely or plausible ability to 'trick' the sender into encrypting or decrypting arbitrary plaintexts or ciphertexts. Codes that resist these attacks are considered to have the utmost security.
• Chosen plaintext: the attacker has the capability to find the ciphertext corresponding to an arbitrary plaintext message of his choosing.
• Chosen ciphertext: the attacker can choose arbitrary ciphertext and find the corresponding decrypted plaintext. This attack can show up in public key systems, where it may reveal the private key.
• Adaptive chosen plaintext: the attacker can determine the ciphertext of chosen plaintexts in an interactive or iterative process based on previous results; 'differential cryptanalysis' is the general name for such a method of attacking product ciphers.
A common model for the enhancement of the existing symmetric algorithms has been proposed.

METHODOLOGY

Advantage of formulating mathematically:
In basic cryptology you can never prove that a cryptosystem is secure; you can only show that it resists the attacks you know of. A strong cryptosystem must have this property, but having this property is no guarantee that a cryptosystem is strong. In contrast, the purpose of mathematical cryptology is to precisely formulate and, if possible, prove the statement that a cryptosystem is strong. We say, for example, that a cryptosystem is secure against all (passive) attacks if any nontrivial attack against the system is too slow to be practical. If we can prove this statement then we have confidence that our cryptosystem will resist any (passive) cryptanalytic technique. If we can reduce this statement to some well-known unsolved problem then we still have confidence that the cryptosystem isn't easy to break. Other parts of cryptology are also amenable to mathematical definition. Again the point is to explicitly identify what assumptions we're making and prove that they produce the desired results. We can figure out what it means for a particular cryptosystem to be used properly: it just means that the assumptions are valid. The same methodology is useful for cryptanalysis too: the cryptanalyst can take advantage of incorrect assumptions.
Compression aids encryption by reducing the redundancy of the plaintext. This increases the amount of ciphertext you can send encrypted under a given number of key bits. Nearly all practical compression schemes, unless they have been designed with cryptography in mind, produce output that actually starts off with high redundancy; compression is generally of value, however, because it removes other known plaintext in the middle of the file being encrypted. In general, the lower the redundancy of the plaintext being fed to an encryption algorithm, the more difficult the cryptanalysis of that algorithm. In addition, compression shortens the input file, shortening the output file and reducing the amount of CPU time required to run the encryption algorithm. Compression after encryption is silly: if an encryption algorithm is good, it will produce output which is statistically indistinguishable from random numbers, and no compression algorithm will successfully compress random numbers.

TRIANGULAR-CODED ENCRYPTION ALGORITHM:
According to the Triangular Algorithm, compression is completed along with encryption. Consider a triangle ABC with sides 'a', 'b', and 'c' opposite the respective angles. Sides 'a' and 'b' carry the actual data and 'c' is the ciphertext. Angle 'C' is the symmetric key, which is used for both encryption and decryption in this algorithm. Angle 'A' keeps changing for different measurements of sides 'a' and 'b'. The level of encryption can be increased to enhance the security of the ciphertext.


Figure 1. Triangle formed by the plaintexts 'a' and 'b' with C and A as the angles.

In the encryption phase, the transmitter knows the sides 'a' and 'b' and the angle 'C'. We get the ciphertext 'c' from the sides 'a' and 'b' and the angle 'C'; the angle 'A' is then calculated from the parameters 'a', 'c', and 'C'. 'c' and 'A' are the parameters to be transmitted. The formula used to calculate the ciphertext 'c' is the law of cosines:

c = sqrt(a^2 + b^2 - 2ab cos C)

where
a: plaintext 1
b: plaintext 2
C: the secret key
c: the ciphertext

The varying angle 'A' follows from the law of sines:

A = arcsin(a sin C / c)

where
A: varying angle
a: plaintext 1
c: ciphertext
C: secret key

Now in the decryption phase, the receiver knows the parameters 'c', 'A', and 'C', which are used to extract the actual data 'a' and 'b'. So it is obvious that C is the symmetric key known by both the sender and the receiver. But the side 'a' changes even for a constant value of C, and naturally the angle 'A' changes too.

B = 180 - (A + C)

where
B: opposite angle of 'b'
A: varying angle
C: secret key

By the law of sines,

a = c sin A / sin C

where
a: plaintext 1
c: ciphertext
A: varying angle
C: secret key

b = c sin B / sin C

where
b: plaintext 2
c: ciphertext
B: opposite angle of 'b'
C: secret key

Thus the plaintexts 'a' and 'b' are retrieved by the above formulas. The values of the plaintexts 'a' and 'b' are found based on the ciphertext 'c', the secret key 'C', and the varying angle 'A'.
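A numerical sketch of the scheme as reconstructed above (the trigonometric formulas are the natural reading of the paper's triangle construction; floating-point rounding makes this an illustration, not a practical cipher):

import math

def tce_encrypt(a: float, b: float, C_deg: float):
    C = math.radians(C_deg)
    c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))  # law of cosines
    A = math.degrees(math.asin(a * math.sin(C) / c))        # law of sines (A assumed acute)
    return c, A                                             # transmit (c, A); C is the shared key

def tce_decrypt(c: float, A_deg: float, C_deg: float):
    B = 180.0 - (A_deg + C_deg)                             # angles of a triangle sum to 180
    sinC = math.sin(math.radians(C_deg))
    a = c * math.sin(math.radians(A_deg)) / sinC            # law of sines
    b = c * math.sin(math.radians(B)) / sinC
    return a, b

c, A = tce_encrypt(21.0, 52.0, 60.0)   # C = 60 degrees as the secret key
a, b = tce_decrypt(c, A, 60.0)
print(round(a, 6), round(b, 6))        # 21.0 52.0, recovered up to rounding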



THE CRYPTANALYSIS:
The sum of the angles in a triangle is 180 degrees,
(i.e.) θ1 + θ2 + θ3 = 180.
The angle θ1 of a particular side (the one opposite the base) is considered to be the secret key. It can vary from 1 to 178 degrees, since the other two angles must take at least 1 degree each when θ1 is at its maximum:
max θ1 <= 180 - 1 - 1 = 178.
If θ1, the key, is taken to 7 decimal places, there are 1 * 10^7 possible values between 1 and 2, and the range between 1 and n for 7 decimals is
Rn = n * 10^7
where Rn is the range (number of possible keys) up to n.
PROPOSED MODEL (Universal Security Reinforcement Model):
The sender and receiver share one more key, called the sloppy key, in addition to their conventional key. The sloppy key is changed dynamically, based on the data contained in the sk-th data item transmitted over the net. This new key is then synthesized with the conventional encryption key, the symmetric key (Smk), and a synergistic sloppy key (Sk) is created with the help of the sloppy key generator Ø:
Sk = Ø(sk, Smk, Vc)
where
Smk - symmetric key (the conventional key)
sk - the new key
Vc - validity count
Ø - sloppy key generator (this may be any operation, such as addition, subtraction, log, sin, cos, etc.)
Sk - the sloppy (synergistic) key
Let us take an example; the model works as illustrated below. Let the data to be transmitted be:

21 52 43 15 75 26 17 28 99 10 45
94 72 03 62 96 92 63 34 20
38 19 45 30 28 52 92 51 80 23

Assume the first new key is 4. Then for the first 4 data items (up to the value 15), the new key is 4. For example, for the value 52 the new key is 4; if the symmetric key is, say, 5, the sloppy key is calculated from 4 and 5 (e.g. by addition), so the sloppy key is 9, and that sloppy key 9 is used for those 4 data items. The next new key is 15 (the value at the 4th position), so for the next 15 data items the key is calculated the same way as before. The next new key after that is 63 (at the 15th position), and the process is repeated.
So the sloppy key is changed block by block. To reduce the block size, the validity count Vc can be set accordingly, so that hacking becomes more difficult.
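The prose above leaves some details open; the following Python sketch is one reading of the schedule, with addition standing in for the generator Ø and the value at each block boundary driving the next block:

def sloppy_keys(data, symmetric_key, first_key=4):
    keys, pos, new_key = [], 0, first_key
    while pos < len(data):
        sloppy = new_key + symmetric_key          # Ø = addition in this example
        block_len = max(1, new_key)               # guard against a zero-length block
        keys.extend([sloppy] * min(block_len, len(data) - pos))
        pos += block_len
        if pos <= len(data):
            new_key = data[pos - 1]               # boundary value becomes the next new key
    return keys

data = [21, 52, 43, 15, 75, 26, 17, 28, 99, 10, 45,
        94, 72, 3, 62, 96, 92, 63, 34, 20,
        38, 19, 45, 30, 28, 52, 92, 51, 80, 23]
print(sloppy_keys(data, 5)[:6])   # first block keyed 4+5=9, next block 15+5=20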

CONCLUSION:
In summary, a common model was suggested for the enhancement of all crypto algorithms, including the TCE algorithm emphasized in this paper. The main intention of this paper is to reinforce the security of all existing algorithms using the above methodology. This model can be implemented wherever resistance to cryptanalysis is of much importance. The key concept of this approach is that a sloppy key (Sk) is generated along with the symmetric key (Smk). This sloppy key (Sk) is determined using the key adjuster, the sloppy key generator (Ø), whose significance is the regular breaking and replacement of the key in use: as the range set by the validity counter (Vc) is decreased, the sloppy key (Sk) changes more frequently, which makes hacking more difficult.

CRYPTOGRAPHY IN SMART CARDS


In the age of universal electronic connectivity, of viruses and hackers, there is indeed no time at which security does not matter. The issue of security and privacy is not a new one, however, and the age-old science of cryptography has been in use since people first had information they wished to hide. Cryptography has naturally been extended into the realm of computers and provides a solution to electronic security and privacy issues.
As technology advances, smart cards (e.g. SIM cards, bank cards, health cards) play a very important role in processing many transactions with a high level of security.
This security level is achieved by means of cryptography. In this paper we present an introduction to cryptography and the role it plays in smart cards.






1. INTRODUCTION

Cryptography comes from the Greek words for "secret writing". Cryptography is the science of enabling secure communications between a sender and one or more recipients. It deals with the processes of scrambling plain text (ordinary text, or clear text) into cipher text (a process called encryption), then back again (known as decryption).

Fig: Encryption model
An intruder is a hacker or cracker who hears and accurately copies down the complete ciphertext. A passive intruder only listens to the communication channel, but an active intruder can also record messages and play them back later, inject his own messages, or modify legitimate messages before they get to the receiver.



Cryptography concerns itself with four objectives:
1. Confidentiality (the information cannot be understood by anyone for whom it was not intended).
2. Integrity (the information cannot be altered in storage or transit between sender and intended receiver without the alteration being detected).
3. Non-repudiation (the creator/sender of the information cannot deny at a later stage his or her intentions in the creation or transmission of the information).
4. Authentication (the sender and receiver can confirm each other's identity and the origin/destination of the information).

2. TYPES OF ENCRYPTION
We have two variations
• Symmetric encryption
• Asymmetric encryption
In symmetric encryption, the same key is used for both encryption and decryption. Consider a situation where Alice, a user from company A, is electronically communicating with Bob, a user of company B.
In symmetric communication between Alice and Bob, Alice would encrypt her message using a key and then send the message to Bob. Alice would separately communicate the key to Bob to allow him to decrypt the message. To maintain security and privacy, Alice and Bob need to ensure that the key remains private to them.
Symmetric encryption can be implemented by
• DES - the Data Encryption Standard
• AES - the Advanced Encryption Standard
• the cipher modes
In asymmetric encryption, separate keys are used for encryption and decryption.

Fig: Asymmetric communication between Bob and Alice
Here, Alice is sending a message to Bob. Alice creates her message, then encrypts it using Bob's public key. When Bob receives the encrypted message, he uses his secret private key to decrypt it. As long as Bob's private key has not been compromised, both Alice and Bob know that the message is secure.
Asymmetric encryption can be implemented by
• RSA (Rivest, Shamir, Adleman)
• other public key algorithms



3. APPLICATIONS OF CRYPTOGRAPHY:
The following are some of the applications of cryptography:
• Digital Signatures
• Digital Certificates
• Message Digests
• Secure Socket Layer
• Secure E-Business
• Secure IP
• Challenge/Response systems (Smart Cards)
In this paper we are concentrating on Smart Cards.
4. SMART CARDS:
Smart cards are an ideal means of providing the required level of security. In recent years, smart card technology has advanced quickly and has now reached a state where smart cards integrate easily into public key infrastructures. Today's smart cards provide memory, and they have cryptographic coprocessors that allow them to generate digital signatures using the RSA algorithm.

a) Architecture:
A smart card is a credit-card-sized plastic card with an integrated circuit (IC) contained inside. The IC contains a microprocessor and memory, which gives smart cards the ability to process information as well as store it.

Fig: Contact chip and Smart card architecture


The figure shows the architecture of a smart card, which contains RAM, ROM, FLASH memory, and a coprocessor. Smart cards use RAM for temporary storage and ROM as a bootstrap for loading the operating system. FLASH memory allows much higher data storage capacity on the card. The card has a dedicated on-chip coprocessor, called a crypto processor, for key generation and asymmetric algorithm acceleration.
The contact chip is an integrated circuit created by a lithographic process as a series of etched and plated regions on a tiny sheet of silicon.
A smart card can be used for payment transactions, such as purchases, and non-payment transactions, such as information storage and exchange.

b) Role of Cryptography:
The smart card provides two types of security services: user authentication and digital signature generation. Smart cards are specifically designed to perform these services with a high level of security. Authenticating users means proving that users are who they say they are. There are various ways to implement authentication using a smart card, but in this paper we present smart cards with crypto processors. A smart card's data storage structure is comparable to the directory structure of disk media.
The main structure is based on three component types:
• Master File (MF), the root directory
• Dedicated File (DF), application directories or sub-directories
• Elementary File (EF), data files
On the smart card there is only one Master File, which contains some data files with global information about the smart card and its holder.
Dedicated files are directories that can be created under the root directory. Each application has a directory of its own, and an application directory can have one or more sub-directories.
Each directory has some specific elementary files, which contain secret cryptographic keys. All dedicated and elementary files have access conditions that govern which commands may be executed on a file. A sketch of this hierarchy follows.
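As a hypothetical Python illustration of the MF/DF/EF hierarchy (the class and file names are invented for the example):

from dataclasses import dataclass, field

@dataclass
class ElementaryFile:                         # EF: a data file (may hold keys)
    name: str
    data: bytes = b""

@dataclass
class DedicatedFile:                          # DF: an application directory
    name: str
    files: list = field(default_factory=list)

@dataclass
class MasterFile:                             # MF: the single root directory
    global_files: list = field(default_factory=list)
    applications: list = field(default_factory=list)

mf = MasterFile(
    global_files=[ElementaryFile("EF_ID", b"card holder info")],
    applications=[DedicatedFile("BANKING", files=[ElementaryFile("EF_KEY")])],
)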
c) Cryptographic computations by Smart Cards:
The maximum length of data that can be encrypted by the smart card, when the data is not stored on the card, is 8 bytes. The command that provides this encryption is called INTERNAL AUTHENTICATE and was developed to authenticate the smart card to the outside world. The command requires a random number from the outside world and a secret key that is stored on the smart card. The smart card encrypts the random number with the secret key, proving to the outside world that it holds the key.
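The flow can be simulated as below. This is only a sketch: AES stands in for the card's actual block cipher (classically DES, which matches the 8-byte limit), and the function name is invented:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

card_key = os.urandom(16)                     # written to the card at issuance

def internal_authenticate(challenge: bytes) -> bytes:
    # Card side: encrypt the 8-byte challenge under the on-card key.
    block = challenge.ljust(16, b"\x00")      # pad to one AES block for the demo
    enc = Cipher(algorithms.AES(card_key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

challenge = os.urandom(8)                     # random number from the outside world
response = internal_authenticate(challenge)
# The terminal, which also knows card_key, recomputes the encryption and
# compares it with the response; a match proves the card holds the key.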
The smart card is also able to compute a Message Authentication Code (MAC) over data that is stored on the smart card. A MAC that is computed by the smart card is also called a stamp.
By default, data is stored unencrypted on a smart card, but a smart card can encrypt data that is stored in specific files. Encryption is possible for a file that has the access condition ENC (ENCrypted) for the read command.
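A minimal sketch of the stamp computation mentioned above (HMAC-SHA256 is an assumption here; cards of this generation typically used DES-based MACs):

import hmac, hashlib, os

card_key = os.urandom(16)                     # MAC key held on the card
stored_data = b"balance=100"                  # contents of an elementary file
stamp = hmac.new(card_key, stored_data, hashlib.sha256).digest()
# A verifier holding the same key recomputes the stamp; any change to the
# stored data produces a different stamp and so reveals tampering.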
d) Storage of Secret keys on Smart Card
The architecture of smart cards allows secret cryptographic keys to be stored in a safe manner. The stored keys can only be used to perform cryptographic computations; they cannot be read out. The keys are stored in specific data files called EF_KEY. The initial secret keys are written to the smart card during the initialization process performed by the card issuer. To write a new secret key Knew to the smart card, secret keys that are already stored on the smart card are needed.
A smart card makes use of two kinds of secret keys:
• Management keys
• Operational keys
A management key is used to encrypt another management key or an operational key that has to be written to the smart card. A management key is also called a Key Encrypting Key (KEK).
An operational key is used by the smart card to perform cryptographic operations on data. A sketch of key wrapping with a KEK follows.
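As an illustrative sketch of the KEK idea (Fernet is an assumption standing in for the card's actual cipher):

from cryptography.fernet import Fernet

management_key = Fernet.generate_key()        # KEK already present on the card
kek = Fernet(management_key)

new_operational_key = Fernet.generate_key()   # key the issuer wants to install
wrapped = kek.encrypt(new_operational_key)    # sent to the card in wrapped form
# Card side: unwrap under the KEK and store the result in an EF_KEY file.
assert kek.decrypt(wrapped) == new_operational_key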

5. APPLICATIONS OF SMART CARD:
Smart cards are used for a huge range of applications today. A few common examples are briefly described here.

i) SIM cards:
A common application of smart cards is in mobile phones. The central security processor of a mobile phone is a Global System for Mobile communications (GSM) SIM (Subscriber Identity Module) card. The use of SIM cards has radically improved the security of digital phones compared to the older analogue devices.


ii) Bank Cards:
Credit and debit cards are increasingly being used via the contact chip rather than being swiped. The security features offered by smart cards protect consumers from having their cards cloned, as it is much more difficult to copy a cryptographically protected chip than a magnetic stripe.
iii) Health Cards:
Increasingly, smart cards are being used to store a citizen's medical data. The cards are carried by the citizen and can contain information such as a list of allergies, current and past medications, treatment history, disease history and doctor's notes. This enables medical information to be accessed easily in an emergency.

Consider the following scenario of how a smart card works for banking.

Stage 1: This is the initial process, where enrollment of the customer takes place; the image and details of the customer are saved on the card.
Fig: Evaluation scenario of smart cards
Stage 2: After the enrollment process, money is loaded and the wallet value is updated.
Stage 3: When the customer inserts the card to withdraw money, the system reads the data from the card to verify the validity of the customer.
Stage 4: After verification, the machine credits or debits the customer's account. Finally, the wallet value is updated.

6. MERITS AND DEMERITS:
High-level security can be achieved using cryptography in smart cards. Data present in the smart card is well protected and can be viewed only by authorized persons.
Although this system is very effective as protection, the large amount of processing power it requires makes it impractical on older, slower computers that lack the resources for such an extensive encryption system. Weak authentication may also break the security provided by the smart card.

7. CONCLUSION:
Cryptography provides a solution to the problem of security and privacy. The use of cryptography in smart cards has become very popular. Smart card technology can be implemented for multiple applications such as bank cards, SIM cards, and health cards.
As card technologies continue to develop we can expect to see advanced cards interacting directly with users through displays, biometric sensors and buttons. This will open up many exciting novel applications, and further increase the usability of Smart Cards.


Achieving Higher QoS by GPRS and WLAN Integration

ABSTRACT:-
GPRS (General Packet Radio Service) is a packet-based communication service for mobile devices that allows data to be sent and received across a mobile telephone network. GPRS is a step towards 3G and is often referred to as 2.5G. As wireless technology evolves, one can access the Internet almost everywhere via many wireless access networks, such as wireless LAN and GPRS. People would like to use wireless networks with high data rates, large coverage and low cost. Some networks, such as GPRS, can provide large coverage but only a low data rate; others, like wireless LAN, can provide a high data rate, but the access points are not widely deployed. No single wireless network can meet all the requirements of a mobile user. Heterogeneous networks solve part of the problem: users can roam among different kinds of networks, such as 802.11 wireless LAN and GPRS, through vertical handoffs. But in a heterogeneous network, each kind of wireless network provides a different quality of service, so users roaming among the networks suffer enormous changes in quality of service. This paper proposes three access network selection strategies that keep mobile users in the higher-quality wireless networks longer, and thus improve the average available bandwidth and decrease the call blocking probability.


Introduction:

IEEE 802.11 wireless LAN is the most popular high-data-rate wireless network, but the coverage of an access point is small, and access points are not widely deployed or well organized. Users cannot receive WLAN services ubiquitously and have to change their settings when they move between different WLANs.
Cellular systems like GPRS, on the other hand, can provide service almost everywhere, but they cannot match the data rate of WLAN: IEEE 802.11g has a 54 Mbps transmission rate, while GPRS offers only 171 kbps under optimal conditions. Vertical handoffs in a heterogeneous network let users get service from both GPRS and WLAN: users who leave the coverage of an access point can vertically hand over to the GPRS network so that their Internet service is not terminated. This paper proposes new mobility strategies to extend the time mobile hosts stay in higher-quality networks in the heterogeneous network environment by using ad hoc networks. In an ad hoc network, mobile hosts relay messages for other mobile hosts. This characteristic helps to extend the service range of an access point whenever mobile hosts are available to form a path that can relay messages to the access point.
Interworking mechanisms:-



The integration of WLAN into GPRS will allow users in "hot-spot" areas to use the high-speed wireless network and, when outside a hot-spot coverage area, to use the cellular data network. This is, however, not simple to implement, as it must provide services such as session continuity, integrated billing and authentication between networks, inter-carrier roaming and, most importantly, a seamless user experience.
Some Existing coupling methods:
1. Tight coupling methods:


In general, the proposed tight coupling architecture provides a novel solution for interworking between 802.11 WLANs and GPRS, and offers many benefits, such as:
• Seamless service continuation across WLAN and GPRS. The users are able to maintain their data sessions as they move from WLAN to GPRS and vice versa.
• Reuse of GPRS AAA.
• Reuse of GPRS infrastructure (e.g., core network resources, subscriber databases, billing systems) and protection of cellular operator’s investment.
• Support of lawful interception for WLAN subscribers.
• Increased security, since GPRS authentication and ciphering can be applied on top of WLAN ciphering.
• Common provisioning and customer care.
2. Loose Coupling Methods:


Loose coupling is another approach to interworking between GPRS and WLAN. Here the WLAN network is coupled with the GPRS network in the operator's IP network. Note that, in contrast to tight coupling, the WLAN data traffic does not pass through the GPRS core network but goes directly to the operator's IP network.
Disadvantages of Existing Methods:


• Tightly coupled WLAN and GPRS networks cannot easily support third-party WLANs.
• Throughput capacity is much lower.
• More importantly, tight coupling cannot support legacy WLAN terminals, which do not implement the GPRS protocols.
• Implementation cost is high.
The Proposed Strategies:


In this paper, the heterogeneous network is composed of WLAN, ad hoc WLAN and the GPRS network. With the use of an ad hoc WLAN network, mobile hosts can access the Internet through other hosts relaying to a WLAN AP. In the original heterogeneous network environment, mobile hosts prefer WLAN; if no WLAN AP is available, a mobile host hands over to the GPRS network to keep its connections alive. With the use of ad hoc WLAN, mobile hosts have another alternative when no WLAN AP is available: they can choose the ad hoc WLAN. However, there may be more than one mobile host that can relay packets to more than one access point, so a mobile host must select the best relay host, or decide not to use the ad hoc network at all.


A mobile wireless network without infrastructure is commonly known as an ad hoc network. Infrastructureless networks have no fixed routers. All nodes are capable of movement and can be connected dynamically in an arbitrary manner. The nodes of such networks function as routers, which discover and maintain routes to other nodes in the network.
Selection strategies:-
Making such decisions is a problem, and three selection strategies are proposed. The selection strategies are detailed below.
A. Fixed hop counts (FHC)
In this strategy, the ad hoc route cannot be longer than n hops. The mobile host first looks for an access point; if no access point is available, it tries to find a mobile host that has a route shorter than n – 1 hops to an access point. If more than one route is shorter than n – 1 hops, it selects the shortest one. If several routes tie for the shortest hop count, it selects the AP that has the same IP range as itself; if no AP has the same IP range, it selects an arbitrary one. If no route is shorter than n – 1 hops, it tries to select the GPRS network. A sketch of this selection follows.
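As an illustrative Python sketch of FHC (the data layout and names are invented; routes maps each candidate relay host to a pair of its hop count and the IP range of its access point):

def select_fhc(aps, routes, my_ip_range, n):
    if aps:                                       # 1. a direct WLAN AP wins
        return ("WLAN", aps[0])
    short = {h: r for h, r in routes.items() if r[0] <= n - 1}
    if short:                                     # 2. shortest ad hoc route
        best_len = min(r[0] for r in short.values())
        best = [h for h, r in short.items() if r[0] == best_len]
        same = [h for h in best if short[h][1] == my_ip_range]
        return ("ADHOC", (same or best)[0])       # 3. prefer own IP range
    return ("GPRS", None)                         # 4. fall back to GPRS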


B. Any available route (AAR)
In this strategy, any ad hoc route will be chosen if no higher-quality network is available: the mobile host tries to find a mobile host that has the shortest route to an access point. If no route is available, it tries to select the GPRS network.
C. Bandwidth pre-evaluation (BPE)
In the third strategy, the network status is measured before selection; an ad hoc network is selected only if it offers a higher quality of service than the GPRS network.


Call initiation in network:-

In the proposed strategy, when a mobile host tries to initiate a call, it looks for a WLAN AP, an ad hoc WLAN relay host and the GPRS network sequentially. If none of the networks can be selected, the connection is rejected. When a user leaves the coverage of a GPRS cell or an access point, a handoff occurs. These cases are more complicated than call initiation, and we discuss the three cases separately below.
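A minimal sketch of this sequential search (the finder callbacks are assumptions for illustration; each returns a reachable target or None):

def initiate_call(find_ap, find_relay, find_gprs):
    for name, finder in (("WLAN", find_ap),
                         ("ADHOC", find_relay),
                         ("GPRS", find_gprs)):
        target = finder()
        if target is not None:                    # first available network wins
            return (name, target)
    raise ConnectionError("no network available: connection rejected")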
A. Handoff from WLAN:-
First, try to find another WLAN AP. If no other AP is available, try to select an ad hoc WLAN network. If no ad hoc WLAN is qualified, try to select the GPRS network. Finally, if no GPRS network is available, the connection is forcibly terminated.
B. Handoff from ad hoc WLAN:-
First, try to find a WLAN AP. If no AP is available, try to select another ad hoc WLAN network. If no ad hoc WLAN is qualified, try to select the GPRS network. Finally, if no GPRS network is available, the connection is forcibly terminated.
C. Handoff from GPRS:-
First, try to find another GPRS base station. If no other base station is available, try to find a WLAN AP. If no AP is available, try to select an ad hoc WLAN network. Finally, if no ad hoc WLAN is qualified, the connection is forcibly terminated.
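The three fallback chains can be captured in one sketch (again, the finder callbacks are illustrative assumptions):

def handoff(from_net, find_bs, find_ap, find_adhoc, find_gprs):
    if from_net == "GPRS":                        # case C tries another base station first
        order = [find_bs, find_ap, find_adhoc]
    else:                                         # cases A and B share the same order
        order = [find_ap, find_adhoc, find_gprs]
    for finder in order:
        target = finder()
        if target is not None:
            return target
    return None                                   # no network: the call is force-terminated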
Conclusions:-
The proposed strategies reduce the number of times a user changes his or her IP address. The advantage disappears as mobility increases, because routes cannot be maintained in a highly mobile network. Here, three mobility strategies were proposed to improve the service quality for mobile hosts in heterogeneous networks by using ad hoc routing. Using the proposed strategies, the average available bandwidth can be more than twice that with no strategy applied, and the request-blocking rate can be reduced by up to 94%, and by 50% on average. The change of IP address is a serious problem for mobile users, and the proposed strategies give a 9% improvement in the number of IP address changes. This helps to ease the impact of the mobile IP protocols on real-time applications.
However, the proposed strategies inherit the drawbacks of ad hoc networks. The handoff rate rises because relaying hosts are unstable. This can be mitigated by using an ad hoc routing protocol that considers stability, or by reducing the length of the ad hoc routes.

NETWORK SECURITY: Honeypot Solutions






Honeypots are an exciting new technology. In the past several years there has been growing interest in exactly what this technology is and how it works. The purpose of this paper is to introduce you to honeypots and demonstrate their capabilities.
A honeypot is a security resource whose value lies in being probed, attacked, or compromised. The key point of this definition is that honeypots are not limited to solving only one problem; they have a number of different applications. To better understand the value of honeypots, we can break them down into two categories:
1. Production
2. Research
A properly constructed honeypot is put on a network that closely monitors the traffic to and from the honeypot. This data can be used for a variety of purposes:
• Forensics - analyzing new attacks and exploits
• Trend analysis - looking for changes over time in the types of attacks, techniques, etc.
• Identification - tracking the bad guys back to their home machines to figure out who they are
• Sociology - learning about the bad guys as a group by snooping on the email, IRC traffic, etc. that happens to traverse the honeypot
Traditionally, honeypots have been physical systems on a dedicated network that also contains multiple machines for monitoring the honeypot and collecting logs from it.
This paper throws further light on the advantages and disadvantages of honeypots, and on some honeypot solutions. Honeypots are certainly a boon to the field of network security.






Introduction:
Many people have their own definition of what a honeypot is, or what it should accomplish. Some feel it's a solution to lure or deceive attackers; others feel it's a technology used to detect attacks; still others feel honeypots are real computers designed to be hacked into and learned from. In reality, they are all correct.
Definitions and Value of Honeypots:
Over the past several years there has been a growing interest in honeypots and honeypot-related technologies. Honeypots are not a new technology; they were first described in a couple of very good papers by several icons in computer security. There are a variety of misconceptions about what a honeypot is, how it works, and how it adds value. It is hoped that this paper helps clear up those issues.
We may define a honeypot as "a security resource whose value lies in being probed, attacked or compromised." This means that whatever we designate as a honeypot, it is our expectation and goal to have the system probed, attacked, and potentially exploited. Keep in mind that honeypots are not a solution. They do not 'fix' anything. Instead, honeypots are a tool. How you use that tool is up to you and depends on what you are attempting to achieve. A honeypot may be a system that merely emulates other systems or applications, creates a jailed environment, or is a standard, fully built system. Regardless of how you build and use the honeypot, its value lies in the fact that it is attacked.
We will break honeypots into two broad categories:
1. Production honeypots
2. Research honeypots
Production Honeypot:
The purpose of a production honeypot is to help mitigate risk in an organization. The honeypot adds value to the security measures of an organization. Traditionally, commercial organizations use production honeypots to help protect their networks; a production honeypot adds value to the security of production resources. Let's cover how production honeypots apply to the three areas of security: prevention, detection, and reaction.

Prevention:
Honeypots will not help keep the bad guys out. What will keep the bad guys out is best practices, such as disabling unneeded or insecure services, patching what you do need, and using strong authentication mechanisms. It is the best practices and procedures such as these that will keep the bad guys out. A honeypot, a system to be compromised, will not help keep the bad guys out. In fact, if incorrectly implemented, a honeypot may make it easier for an attacker to get in.
Some individuals have discussed the value of deception as a method to deter attackers. The concept is to have attackers spend time and resources attacking honeypots as opposed to attacking production systems. The attacker is deceived into attacking the honeypot, protecting production resources from attack. Deception may contribute to prevention, but you will most likely get greater prevention by putting the same time and effort into security best practices.
Detection:
While honeypots add little value to prevention, they add extensive value to detection. For many organizations it is extremely difficult to detect attacks. Intrusion Detection Systems (IDS) are one solution designed for detecting attacks; however, IDS administrators can be overwhelmed with false positives. False positives are alerts generated when the sensor recognized the configured signature of an "attack" that was in reality just valid traffic. The problem is that system administrators may receive so many alerts on a daily basis that they cannot respond to all of them. Also, they often become conditioned to ignore these false positive alerts as they come in day after day. The very IDS sensors they depend on to alert them to attacks can become ineffective unless these false positives are reduced. This does not mean that honeypots never have false positives, only that their false positives are dramatically fewer than with most IDS implementations.
Another risk is false negatives, when IDS systems fail to detect a valid attack. Many IDS systems, whether signature-based, protocol-verification-based, etc., can potentially miss new or unknown attacks, and it is likely that a new attack will go undetected by current IDS methodologies. Also, new IDS evasion methods are constantly being developed and distributed, so it is possible to launch a known attack that is not detected, such as with K2's ADMmutate. Honeypots address false negatives, as they are not easily evaded or defeated by new exploits. In fact, one of their primary benefits is that they can most likely detect when a compromise occurs via a new or unknown attack, by virtue of system activity rather than signatures. Administrators also do not have to worry about updating a signature database or patching anomaly detection engines. Honeypots happily capture any attacks thrown their way. As discussed earlier, though, this only works if the honeypot itself is attacked.
Reaction:
Often when a system within an organization is compromised, so much production activity has occurred after the fact that the data has become polluted. Incident response teams cannot determine what happened when users and system activity have polluted the collected data.
The second challenge many organizations face after an incident is that compromised systems frequently cannot be taken off-line. The production services they offer cannot be eliminated. As such, incident response teams cannot conduct a proper or full forensic analysis.
Honeypots can add value by reducing or eliminating both problems. They offer a system with reduced data pollution and an expendable system that can be taken off-line. For example, let's say an organization had three web servers, all of which were compromised by an attacker, and management has only allowed us to go in and clean up specific holes. In that case we can never learn in detail what failed, what damage was done, whether the attacker still had internal access, and whether we were truly successful in the cleanup.
However, if one of those three systems were a honeypot, we would have a system we could take off-line and subject to a full forensic analysis. Based on that analysis, we could learn not only how the bad guy got in but also what he did once he was inside. These lessons could then be applied to the remaining web servers, allowing us to better identify and recover from the attack.
Research Honeypot:
One of the greatest challenges the security community faces is lack of information on the enemy. Questions like who is the threat, why do they attack, how do they attack, what are their tools, and possibly when will they attack? It is questions like these the security community often cannot answer. For centuries military organizations have focused on information gathering to understand and protect against an enemy. To defend against a threat, you have to first know about it. However, in the information security world we have little such information.
Honeypots can add value to research by giving us a platform to study the threat. What better way to learn about the bad guys than to watch them in action, recording step by step as they attack and compromise a system. Of even more value is watching what they do after they compromise a system, such as communicating with other blackhats or uploading a new toolkit. It is this research potential that is one of the most unique characteristics of honeypots. Research honeypots are also excellent tools for capturing automated attacks, such as auto-rooters or worms; since these attacks target entire network blocks, research honeypots can quickly capture them for analysis.
In general, research honeypots do not reduce the risk of an organization. The lessons learned from a research honeypot can be applied, such as how to improve prevention, detection or reaction. However, research honeypots contribute little to the direct security of an organization. If an organization is looking to improve the security of their production environment, they may want to consider production honeypots, as they are easy to implement and maintain. If organizations, such as universities, governments, or extremely large corporations are interested in learning more about threats, then this is where research honeypots would apply. The Honeynet Project is one such example of an organization using research honeypots to capture information on the blackhat community.

Honeypot Solutions:
Now that we have discussed the different types of honeypots and their value, let's look at some examples. Simply put, the more an attacker can interact with a honeypot, the more information we can potentially gain from it, but also the more risk it most likely carries. The more a honeypot can do and the more an attacker can do to a honeypot, the more information can be derived from it; by the same token, however, the more an attacker can do to the honeypot, the more potential damage he can cause. For example, a low-interaction honeypot is easy to install and simply emulates a few services. Attackers can merely scan it and potentially connect to several ports; the information gained is limited (mainly who connected to which ports and when), but there is little the attacker can exploit. At the other extreme are high-interaction honeypots, which are actual systems. We can learn far more from them, as there is a real operating system for the attacker to compromise and interact with; however, there is also a far greater level of risk, since the attacker has a real operating system to work with. Neither solution is the better honeypot; it all depends on what you are attempting to achieve. Remember that honeypots are not a solution; they are a tool, whose value depends on your goal, from early warning and detection to research. Based on level of interaction, let's compare some possible honeypot solutions.
In this article we will discuss four honeypots. There are a variety of other possibilities, but this selection covers a range of options: BackOfficer Friendly, Specter, Honeyd, and homemade honeypots. This article is not meant to be a comprehensive review of these products and only highlights some of their features; instead, it aims to cover the different types of honeypots, how they work, and the value they add along with the risks involved.
• BackOfficer Friendly:
BOF (as it is commonly called) is a very simple but highly useful honeypot. BOF is a program that runs on most Windows-based operating systems. All it can do is emulate some basic services, such as HTTP, FTP, telnet, and mail. Whenever someone attempts to connect to one of the ports BOF is listening on, it logs the attempt. BOF also has the option of "faking replies", which gives the attacker something to connect to. This way you can log HTTP attacks, telnet brute-force logins, or a variety of other activity. It can monitor only a limited number of ports, but these ports often represent the most commonly scanned and targeted services.
• Specter:
Specter is a commercial product similar to BOF in that it emulates services, but it can emulate a far greater range of services and functionality. In addition, it can emulate a variety of operating systems, not just services. Similar to BOF, it is easy to implement and low risk. Specter is installed on a Windows system, and the risk is reduced because there is no real operating system for the attacker to interact with. For example, Specter can emulate a web server or telnet server of the operating system of your choice. When an attacker connects, he is prompted with an HTTP header or login banner and can then attempt to gather web pages or log in to the system. This activity is captured and recorded by Specter, but there is little else the attacker can do: there is no real application to interact with, just some limited, emulated functionality. Specter's value lies in detection. It can quickly and easily determine who is looking for what. As a honeypot, it reduces both false positives and false negatives, simplifying the detection process.
• Home made Honeypots:
Another common type of honeypot is homemade. These honeypots tend to be low-interaction. Their purpose is usually to capture specific activity, such as Worms or scanning activity. They can be used as production or research honeypots, depending on their purpose. Once again, there is not much for the attacker to interact with, and the risk is reduced because there is less damage the attacker can do. One common example is creating a service that listens on port 80 (http), capturing all traffic to and from the port. This is commonly done to capture Worm attacks. One such implementation uses netcat, as follows:
netcat -l -p 80 > c:\honeypot\worm
With the above command, a Worm could connect to netcat listening on port 80. The attacking Worm would make a successful TCP connection and potentially transfer its payload. The payload is saved locally on the honeypot, where it can be analyzed by the administrator to assess the threat posed by the Worm.
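For illustration only, here is the same idea in a few lines of Python (the output file name is arbitrary, and binding to port 80 normally requires administrator rights):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 80))                     # listen where the Worm will connect
srv.listen(1)

conn, addr = srv.accept()                     # a scan or Worm connects
with open("worm_payload.bin", "ab") as log:   # save the payload for later analysis
    while chunk := conn.recv(4096):
        log.write(chunk)
conn.close()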

• Honeyd:
Honeyd is an extremely powerful, open source honeypot. Designed to run on Unix systems, it can emulate over 400 different operating systems and thousands of different computers, all at the same time. Honeyd introduces several exciting new features. First, not only does it emulate operating systems at the application level, like Specter, but it also emulates them at the IP stack level. This means that when someone runs Nmap against your honeypot, both the services and the IP stack behave as the emulated operating system would; currently no other honeypot can do this. Second, Honeyd can emulate hundreds if not thousands of different computers at the same time. While most honeypots can emulate only one computer at any point in time, Honeyd can assume the identity of thousands of different IP addresses. Third, as an open source solution, it is not only free to use, but will grow as members of the security community develop and contribute code.

Value of Honeypots:
Honeypots have certain advantages (and disadvantages) as security tools. It is the advantages that help define the value of a honeypot. The beauty of a honeypot lies in its simplicity: it is a device intended to be compromised, not to provide production services. This means there is little or no production traffic going to or from the device. Any time a connection is sent to the honeypot, it is most likely a probe, scan or attack. Any time a connection is initiated from the honeypot, it most likely means the honeypot was compromised. As there is little production traffic going to or from the honeypot, all honeypot traffic is suspect by nature. This is not always the case; mistakes do happen, such as an incorrect DNS entry or someone from accounting inputting the wrong IP address. But in general, most honeypot traffic represents unauthorized activity.
Advantages :
The advantages of honeypots include:
• Small Data Sets: Honeypots only collect attacks or unauthorized activity, dramatically reducing the amount of data they collect. Organizations that may log thousands of alerts a day may log only a hundred alerts with honeypots. This makes the data honeypots collect much easier to manage and analyze.
• Reduced False Positives: Honeypots dramatically reduce false alerts, as they only capture unauthorized activity.
• Catching False Negatives: Honeypots can easily identify and capture new attacks never seen before.
• Minimal Resources: Honeypots require minimal resources, even on the largest of networks. This makes them an extremely cost-effective solution.
• Encryption: Honeypots can capture encrypted attacks.
• In-depth Information: Honeypots can capture data no other technology can, including the identity of your attacker, their motives, and whom they are potentially working with.
• IPv6: IPv6 is the new IP protocol that represents the future of the Internet and IP-based networking. Most technologies cannot detect, capture, or analyze IPv6-based traffic. Honeypots are one of the few technologies that can operate in any IPv6 (or IPv6-tunneled) environment.
Disadvantages:
• Single data point:
Honeypots all share one huge drawback: they are worthless if no one attacks them. Yes, they can accomplish wonderful things, but if the attacker does not send any packets to the honeypot, the honeypot will be blissfully unaware of any unauthorized activity.
• Risk:
Honeypots can introduce risk to your environment. As we discuss later, different honeypots have different levels of risk. Some introduce very little risk, while others give the attacker entire platforms from which to launch new attacks. Risk is variable, depending on how one builds and deploys the honeypot.
It is because of these disadvantages that honeypots do not replace any security mechanisms; they can only add value by working with existing security mechanisms. Now that we have reviewed the overall value of honeypots, let's apply them to security.
Conclusion :
A honeypot is just a tool; how we use that tool is up to us. There are a variety of honeypot options, each having a different value to organizations. We have categorized two types of honeypots: production and research. Production honeypots help reduce risk in an organization; while they do little for prevention, they can greatly contribute to detection and reaction. Research honeypots are different in that they are not used to protect a specific organization; instead, they are used as a research tool to study and identify the threats in the Internet community. You will have to determine the best balance of risk to capability for your situation. Honeypots will not solve an organization's security problems; only best practices can do that. However, honeypots may be a tool that helps contribute to those best practices.


