Monday, March 29, 2010

Network Security Cryptography





This paper tries to present an insight into cryptography, the ways of implementing it, its uses and its implications. Cryptography, the art and science of secret codes, has existed since the advent of human civilization; it has been used to transmit messages safely and secretly across groups of people so that their adversaries could not learn their secrets. As civilizations progressed, more and more complex forms of cryptography came into being; they were no longer merely symbolic representations in an unrecognizable form but complex mathematical transforms carried out on the messages. In the present day world, cryptography plays a major role in the safe transmission of data across the Internet and other means of communication.
In this paper we have dealt with examples of how different crypto algorithms are implemented, and have tried to cite some of the most widely used crypto algorithms, like DES (the Data Encryption Standard), RSA, IDEA, RC4, etc. We have also dealt with some of the applications of these algorithms, like link encryption, Pretty Good Privacy, public key cryptography, PEM, etc., and have cited some methods of code breaking, or cryptanalysis, like the mathematical attack, the brute force attack and power analysis.
Cryptography
If you want something to stay a secret, don't tell anyone, and don't write it down. If you do have to send it to someone else, hide it in another message so that only the right person will understand. Many creative methods of hiding messages have been invented over the centuries. Cryptography can be defined as the art and science of secret codes. It is a collection of techniques that transform data in ways that are difficult to mimic or reverse by someone who does not know the secret. These techniques involve marking, transforming and reformatting messages to protect them from disclosure, change or both. Cryptography in the computer age basically involves the translation of the original message into a new and unintelligible one by a mathematical algorithm using a specific "key". People mean different things when they talk about cryptography. Children play with toy ciphers and secret languages; however, these have little to do with real security and strong encryption. Strong encryption is the kind of encryption that can be used to protect information of real value against organized criminals, multinational corporations, and major governments. Strong encryption used to be only military business; however, in the information society it has become one of the central tools for maintaining privacy and confidentiality.
Why do we need cryptography?
The art of long distance communication was mastered by civilizations many centuries ago; the transmission of secret political or confidential information has been a problem ever since. To solve this problem to some extent, secret codes were developed by groups of people who had to carry out such secretive communications. These codes were designed to transform words into code words using basic guidelines known only to their members. Messages could now be sent or received with a reduced danger of interception or forgery, as a code breaker would have to struggle really hard to break the code.
As time progressed and radio, microwave and internet communication developed, more complex and safer codes started to evolve. The traditional use of cryptography was to make messages unreadable to the enemy during wartime. However the introduction of the computing age changed this perspective dramatically. Through the use of computers, a whole new use for information hiding was evolved. Around the early 1970's the private sector began to feel the need for cryptographic methods to protect their data. This could include 'sensitive information' (corporate secrets), password files or personal records.
Need for Cryptography
Some day to day examples
Encryption technology is used nowadays in almost all digital communication systems. The most common example is satellite or cable TV: all the signals are available in the air, but the programs can be viewed only by those subscribers who have paid. This is done by a simple password security system; the subscriber gets an authenticated password on payment and can use it only for the period he has paid for, after which it lapses. Another common application of encryption is the ATM card, where again the transaction is done only on acceptance of a secure, authenticated password. Mobile phones, and for that matter even Internet connections, are based on small scale cryptographic techniques.
Crypto algorithm
The crypto algorithm specifies the mathematical transformation that is performed on data to encrypt or decrypt it. A crypto algorithm is a procedure that takes the plain text data and transforms it into cipher text in a reversible way. A good algorithm produces cipher text that yields very few clues about either the key or the plain text that produced it. Some algorithms are stream ciphers, which encrypt a digital data stream bit by bit. The best known algorithms are block ciphers, which transform data in fixed size blocks, one block at a time.
• Stream ciphers
A stream cipher algorithm accepts a crypto key and a stream of plain text, and produces a stream of cipher text.
• Block cipher
Block ciphers are designed to take data blocks of a specific size, combine them with a key of a particular size and yield a block of cipher text of a certain size. Block ciphers are analyzed and tested for their ability to encrypt data blocks of their given block size. A reasonable cipher should generate cipher text that has as few noticeable properties as possible: a statistical analysis of cipher text generated by a block cipher algorithm should find that individual data bits, as well as patterns of bits, appear completely random. Non-random patterns are the first thing a code breaker looks for, as they usually provide the entering wedge needed to crack a code.
Cipher modes
The term cipher mode refers to a set of techniques used to apply a block cipher to a data stream. Several modes have been developed to disguise repeated plaintext blocks and improve the security of the block cipher. Each mode defines a method of combining the plaintext, crypto key and encrypted cipher text in a special way to generate the stream of cipher text actually transmitted to the recipient. In theory there could be countless different ways of combining and feeding back the inputs and outputs of a cipher; in practice, four basic modes are used.
• Electronic Code Book (ECB)
It is the simplest of all the modes: the cipher is simply applied to the plaintext block by block. It is the most efficient mode; it can be sped up by using parallel hardware and, unlike other modes, does not require an extra data word for seeding a feedback loop. However, a block of padding may be needed to guarantee that full blocks are provided for encryption and decryption. ECB has a security problem in that repeated plain text blocks yield repeated cipher text blocks.
• Cipher Block Chaining (CBC)
This mode hides patterns in the plaintext by systematically combining each plaintext block with a cipher text block before actually encrypting it; the two blocks are combined bit by bit using the exclusive-or operation. To guarantee that there is always some random looking cipher text to apply to the first plaintext block, the process is started with a block of random bits called the initialization vector. Two messages will never yield the same cipher text, even if the plain texts are identical, as long as the initialization vectors are different. In most applications the initialization vector is sent at the beginning of the message in plain text. A shortcoming of CBC is that encrypted messages may be as many as two blocks longer than the same message in ECB mode: one block is added to transmit the initialization vector to the recipient, since proper decryption depends on the initialization vector to start the feedback process, and the other block is added as padding so that a full block is always encrypted or decrypted.
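The contrast between ECB and CBC can be sketched with a toy example. The "block cipher" below is just a keyed XOR over 4-byte blocks, and the key and IV values are made up; it is purely illustrative, not a real cipher, but it shows how CBC's chaining hides the repeated plaintext blocks that ECB exposes:

```python
# Toy demonstration of ECB vs CBC. The "block cipher" here is a plain
# keyed XOR over 4-byte blocks -- NOT secure, used only to show how the
# modes combine blocks.

BLOCK = 4

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_block(block, key):
    return xor(block, key)          # stand-in for a real block cipher

def ecb_encrypt(plaintext, key):
    return b"".join(encrypt_block(plaintext[i:i + BLOCK], key)
                    for i in range(0, len(plaintext), BLOCK))

def cbc_encrypt(plaintext, key, iv):
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        # XOR the plaintext block with the previous ciphertext block first
        block = encrypt_block(xor(plaintext[i:i + BLOCK], prev), key)
        out.append(block)
        prev = block
    return b"".join(out)

key = b"\x13\x57\x9b\xdf"
iv = b"\xaa\xbb\xcc\xdd"
msg = b"ABCDABCD"                   # two identical plaintext blocks

ecb = ecb_encrypt(msg, key)
cbc = cbc_encrypt(msg, key, iv)
print(ecb[:4] == ecb[4:])           # True: ECB leaks the repetition
print(cbc[:4] == cbc[4:])           # False: CBC hides it
```

With ECB the two identical plaintext blocks produce identical ciphertext blocks; under CBC the chained feedback makes them differ.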
• CFB - Cipher Feedback mode
CFB is similar to CBC in that it feeds the cipher text block back to the block cipher. However, it is different in that the block cipher doesn't directly encrypt the plain text. Instead, it is used to generate a constantly varying key stream that encrypts the plain text with a Vernam cipher: blocks of plain text are exclusive-or'ed with successive blocks of key stream generated by the block cipher to produce the cipher text. This mode is also called cipher text auto key (CTAK). The advantage of this method is that it is not limited to the cipher's block size; it can be adapted to work with smaller blocks, down to single bits. Like CBC, however, it needs an initialization vector to be sent for decryption.
• OFB - Output Feedback
It is similar to CFB but simpler: it uses the block cipher all by itself to generate the Vernam key stream, which does not depend on the data stream at all. Here the block cipher has nothing to do with processing the message; it is only used to generate keys. This mode is also called auto key mode. The advantage is that, like CFB, the length of the plain text does not have to fit into block boundaries. Also, because the key stream depends only on the key and the initialization vector and not on the data stream, the decryption key stream can be prepared in advance at the receiver's end with knowledge of the key and the initialization vector.
Crypto Algorithms
1. DES
This is a widely used algorithm. It was developed by IBM (from its earlier cipher, Lucifer) and was adopted as an official Federal Information Processing Standard (FIPS PUB 46) in 1977. This algorithm uses a 64 bit key (8 parity bits + 56 key bits), converting 64 bit blocks of plaintext into 64 bit blocks of code (the block cipher method). This is done by putting the original text through a series of permutations and substitutions; the results are then merged with the original plain text using an XOR operation. This encryption sequence is repeated 16 times, using a different arrangement of the key bits each time.
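The swap-and-XOR round structure DES uses (a Feistel network) can be sketched in miniature. The round function F below is a hash-based stand-in, not DES's actual expansion, S-boxes and permutations, and the round keys are made up; but the skeleton, and the fact that decryption is the same network run with the round keys reversed, are the same idea:

```python
import hashlib

# Generic 16-round Feistel network of the kind DES uses. F is a toy
# stand-in for DES's real round function.
def F(half, round_key):
    return hashlib.sha256(half + round_key).digest()[:4]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(block, round_keys):            # block is 8 bytes
    left, right = block[:4], block[4:]
    for rk in round_keys:
        # each round: new left = old right, new right = left XOR F(right)
        left, right = right, xor(left, F(right, rk))
    return right + left                    # final half-swap

round_keys = [bytes([i] * 4) for i in range(16)]   # illustrative key schedule
ct = feistel(b"ABCDEFGH", round_keys)
# Decryption is the same network with the round keys in reverse order:
pt = feistel(ct, list(reversed(round_keys)))
print(pt == b"ABCDEFGH")                   # True
```

Note that F itself never has to be invertible; the XOR structure guarantees reversibility, which is why DES can use highly non-linear S-boxes.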
2. One time pads
A one-time pad is a very simple yet completely unbreakable symmetric cipher; that is, it uses the same key for encryption as for decryption. As with all symmetric ciphers, the sender must transmit the key to the recipient via some secure channel, otherwise the recipient won't be able to decrypt the ciphertext. The key for a one-time pad cipher is a string of random bits; for the cipher's perfect security to hold, these must come from a genuinely random source rather than an ordinary pseudo-random number generator. With a one-time pad, there are as many bits in the key as in the plaintext. This is the primary drawback of a one-time pad, but it is also the source of its perfect security. It is essential that no portion of the key ever be used for another encryption (hence the name "one-time pad"), otherwise cryptanalysis can break the cipher. The algorithm itself is very simple, for example an exclusive-or operation between the plain text and the key; the same exclusive-or operation also gives back the plain text.
Ciphertext = plaintext (+) key
Plaintext = ciphertext (+) key
However, the security of the one-time pad is dependent upon the randomness of the generated key. The code is safe even from a brute force attack that runs the cipher text through all possible keys, because an equal number of plausible plaintext messages would be generated, with no way to tell which is correct.
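The two formulas above can be sketched directly in a few lines. This uses Python's secrets module as the source of key bits and XOR as the combining operation:

```python
import secrets

def otp(data, key):
    # XOR serves for both directions:
    #   ciphertext = plaintext (+) key
    #   plaintext  = ciphertext (+) key
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))   # key as long as the message

ciphertext = otp(plaintext, key)
recovered = otp(ciphertext, key)            # same operation decrypts
print(recovered == plaintext)               # True
```

Because every key is equally likely, every plaintext of the same length is an equally plausible decryption of the ciphertext, which is exactly the brute-force immunity described above.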
3. Triple DES
Triple encoding makes DES encoded text even more secure. It is equivalent to having a 112 bit key length. However, triple DES is significantly slower than commercial alternatives with similar key lengths.
4. Rivest Cipher #4
RC4 is a symmetric stream cipher developed by Ron Rivest. Its key size can be varied according to the level of security required; generally it is used with a 128 bit key length. The algorithm is fairly immune to differential cryptanalysis, but when used with short key lengths it is vulnerable to brute force cracking.
5. Idea
IDEA is an algorithm which appeared in 1990. It was developed at the Swiss Federal Institute of Technology. Its security is based not on hiding the algorithm but on keeping the key secret. Its key is 128 bits long, which makes it more attractive than DES, and it can be used with the usual block cipher modes. The algorithm is publicly available and easy to implement, is suitable for e-commerce, and can be exported and used worldwide. To date none of the published cryptanalysis techniques has worked against IDEA, and a brute force attack on its 128 bit key, even trying a billion keys per second for over a billion years, would still not find the key.
6. Skip Jack
Skipjack is a block encryption algorithm developed by the NSA (National Security Agency, USA). It encrypts 64 bit blocks using an 80 bit key; the usual cipher modes can be used to encrypt streams of data with it. It is provided in prepackaged encryption chipsets and in the Fortezza crypto card, a PC card containing a crypto processor and storage for keying material. The disadvantage of Skipjack is that very little about it is publicly known (reportedly to keep the NSA's design techniques secret). It is fairly resistant to differential cryptanalysis and other shortcut attacks. Skipjack is being promoted to protect military communications in the Defense Messaging System (DMS), which reflects a measure of confidence that it is secure.
7. RSA public key algorithm
The best known and most popular embodiment of the public key idea is RSA, named after its inventors Ronald Rivest, Adi Shamir and Leonard Adleman. The high level of security the RSA algorithm offers derives from the difficulty of decomposing large integers into prime factors: two primes which, when multiplied by one another, give the original number. Prime factoring of very large numbers is an important field in number theory. One of the drawbacks of the RSA algorithm compared with symmetric methods is that encrypting and decrypting messages takes much more computing power. The fastest RSA chip now in existence can only manage a throughput of 600 kbits when using 512 bit primes; comparable DES hardware implementations are anything from 1,000 to 10,000 times faster, and at present DES software implementations can encrypt around 100 times faster than the RSA algorithm. Cryptanalysis can be attempted by factorizing the key into its two primes, but estimates for factoring a 512 bit key show that a computer system running at a million operations a second (1 MIPS) and using current algorithms would take 420,000 years to find the prime factors involved.
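The mechanics of RSA can be illustrated with textbook parameters over deliberately tiny primes (61 and 53, a standard teaching example; real keys use primes hundreds of digits long, and real systems add padding):

```python
# Textbook RSA with toy-sized primes -- for illustration only.
p, q = 61, 53
n = p * q                 # public modulus, 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi

m = 65                    # message, must be less than n
c = pow(m, e, n)          # encrypt:  c = m^e mod n
r = pow(c, d, n)          # decrypt:  r = c^d mod n
print(r == m)             # True
```

Breaking this toy key means recovering p and q from n = 3233, which is trivial here but, as the paragraph above notes, computationally infeasible for 512-bit and larger moduli.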
8. AES
AES is a new algorithm that has now replaced DES as the NIST standard. The Advanced Encryption Standard (AES) provides a better combination of safety and speed than DES. Using 128 bit secret keys, AES offers higher security against brute force attack than the old 56 bit DES keys, and AES can use larger 192 bit and 256 bit keys if necessary. AES is a block cipher and encrypts data in fixed size blocks, but each AES cycle encrypts 128 bits, twice the size of DES blocks. While DES was designed for hardware, AES runs efficiently in a broad range of environments, from programmable gate arrays, to smart cards, to desktop computer software and browsers. In 2000, NIST selected Rijndael, an encryption algorithm developed by two Belgian cryptographers, as the new AES. A few products already use the Rijndael algorithm, notably the open source NetBSD version of Unix, and Rijndael has also appeared as an option in several desktop file encryption programs. AES has since been published as an official FIPS (Federal Information Processing Standard).
Internet cryptography techniques (Applications of the crypto algorithms)
• Point-to-point link encryption
• IP link encryption
• A virtual private network (VPN) constructed with IP security protocol routers
• A VPN constructed with IPSEC firewalls
• Public key cryptography with Pretty Good Privacy (PGP)
• E-mail with Privacy Enhanced Mail (PEM)
• Watermarking
• Point-to-point link encryption
This produces a fully isolated connection between a pair of computers by applying crypto to the data link. It yields the highest security by being the most restrictive in physical and electronic access. It is not necessarily an internet solution since it doesn’t need to use TCP/IP software. It is the simplest design, but the most expensive to implement and extend.
• IP link encryption
This produces a highly secure extensible TCP/IP network by applying crypto to the data link and by restricting physical access to hosts on the network. This architecture blocks communication with untrusted hosts and sites. Sites use point to point interconnections and apply encryption to all traffic on those interconnections.
• VPN construction with IP security
This is a virtual private network that uses the Internet to carry traffic between trusted sites. Crypto is applied at the Internet layer using IPSEC. This approach uses encrypting routers and doesn't provide the sites with access to untrusted Internet sites.
• VPN construction with IPSEC firewalls
This is a different approach to the VPN that uses encrypting firewalls instead of encrypting routers. Crypto is still applied at the internet layer using IPSEC (IP security protocol).The firewalls encrypt all traffic between trusted sites and also provide control access to untrusted hosts. Strong firewall access control is necessary to reduce the risk of attacks on the crypto mechanisms as well as attacks on hosts within the trusted sites.
Digital signature
Digital signatures can be used to check the authenticity of the author of a message using the above mentioned techniques. In 1991 the National Institute of Standards and Technology (NIST) decided on a standard for digital signatures, DSS (the Digital Signature Standard). DSS proposes an algorithm for digital signatures (DSA, the Digital Signature Algorithm); this is based not on RSA but on a public key implementation of the "discrete logarithm problem" (what value must the exponent x assume to satisfy y = g^x mod p, where p is a prime?). While the problem underlying this method is just as hard to solve as RSA's prime factor decomposition, many people have claimed that DSA's security is not perfect, and after massive criticism its key length was finally increased from 512 to 1024 bits. DSS is expected to become an official standard for US government bureaus in the not too distant future.
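The one-way nature of y = g^x mod p can be sketched with toy parameters (the prime, generator and exponent below are made up and far too small for real use; DSA primes are 512 to 1024 bits):

```python
# The one-way function underlying DSA: y = g^x mod p.
# Computing y from x is fast; recovering x from y (the discrete
# logarithm) requires search.
p, g = 1019, 2      # tiny prime and base -- illustration only
x = 300             # the secret exponent
y = pow(g, x, p)    # fast modular exponentiation

# Brute-force "attack": try every exponent until one matches. Feasible
# here only because p is tiny; hopeless at real key sizes.
recovered = next(k for k in range(1, p) if pow(g, k, p) == y)
print(recovered == x)   # True
```

The asymmetry is the whole point: the forward computation is a handful of multiplications even for 1024-bit numbers, while the search grows exponentially with the size of p.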
• PEM
PEM is the standard for encrypting messages on the Internet's mail service. It uses both the RSA public key method and the symmetric DES method. To send a file in encrypted form, it is first encrypted using a randomly generated DES key. The DES key itself is then encoded with the recipient's public key on the RSA system and sent along with the DES encoded file. The advantage of this is that only a small part of the message, the DES key, has to be encoded using the time consuming RSA algorithm; the contents of the message itself are encrypted much faster using the DES algorithm alone.
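The hybrid idea can be sketched as follows. The symmetric cipher below is a hash-based keystream XOR standing in for DES, and the RSA key pair is toy-sized; every parameter is illustrative, but the shape (fast symmetric bulk encryption, RSA wrapping only the session key) is the one PEM uses:

```python
import hashlib, secrets

# Hash-counter keystream XOR: a stand-in for the fast symmetric cipher.
def stream(key, n):
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def sym(data, key):
    return bytes(d ^ k for d, k in zip(data, stream(key, len(data))))

# Toy RSA key pair belonging to the recipient.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

message = b"Hello PEM"
session_key = secrets.token_bytes(8)            # random "DES key" stand-in
body = sym(message, session_key)                # fast bulk encryption
wrapped = [pow(b, e, n) for b in session_key]   # slow RSA wraps ONLY the key

# Recipient: unwrap the session key with the private exponent, then
# decrypt the body with the fast symmetric cipher.
unwrapped = bytes(pow(c, d, n) for c in wrapped)
print(sym(body, unwrapped) == message)          # True
```

Only the 8-byte session key ever passes through the expensive RSA operation; the message body, however long, uses the cheap symmetric path.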
• Message Digests
There is one more important encryption technique worth mentioning, and that is the one way function. It is basically a non-reversible quick encryption: encrypting is easy but decrypting is not. While encryption could take a few seconds, decryption could take hundreds, thousands or even millions of years for the most powerful computers. One way functions are used to test the integrity of a document or file by generating a digital fingerprint of it using special hash functions. Assume that you have a document to send someone, or to store for the future, and you need a way to prove at some later time that the document has not been altered. You run it through a one way function, which produces a fixed length value called a hash (also called a message digest). The hash is a unique signature of the document that you can keep and send with the document. The recipient runs the same one way function to produce a hash that should match the one you sent with the document; if the hashes don't match, the document has been altered or corrupted.
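The fingerprint check looks like this using a hash function from Python's standard library (SHA-256 here; the principle is the same for any cryptographic hash):

```python
import hashlib

document = b"Pay Alice 100 dollars"
digest = hashlib.sha256(document).hexdigest()    # fixed-length fingerprint

# Later, the recipient recomputes the hash and compares it:
check = hashlib.sha256(b"Pay Alice 100 dollars").hexdigest()
print(check == digest)      # True: the document is unchanged

# Any alteration, however small, changes the fingerprint completely:
tampered = hashlib.sha256(b"Pay Alice 900 dollars").hexdigest()
print(tampered == digest)   # False: alteration detected
```

Note the digest is always the same length (64 hex characters for SHA-256) regardless of how large the document is.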
• Water marking
A watermark is data that is imperceptibly added to the cover signal in order to convey hidden information. It is used to protect the copyright of an author on the Internet. A watermark is a hidden file, consisting of either a picture or data, that gets copied along with the document whenever it is downloaded from the web; because of this, unauthorized copying or distribution of the article can be detected.
Latest crypto techniques
Policy that regulates technology ends up being made obsolete by technological innovation. Trying to regulate confidentiality by regulating encryption closes one door and leaves two open: steganography and winnowing.
• Steganography
An encrypted message looks like garbage, and alerts people that there is something to hide. But what if the message is totally innocuous looking? This is an old trick that started centuries ago with writing in ink that is invisible until the paper has been heated. The microdot, a piece of film containing a very highly reduced image of the secret message and embedded in the punctuation marks of a normal document, was invented during World War II. For a modern example, if you used the least significant bit of each pixel in a bitmap image to encode a message, the impact on the appearance of the image would not be noticeable. This is known as steganography, or covered writing. A 480 pixel wide by 100 pixel high image, smaller than many WWW home page banners, could theoretically contain a message of more than 5,000 characters. The encoding is quite easy with a computer, and involves no complicated mathematics at all. The same principles apply to audio and video files as well. The image can be used simply as a carrier, with the message being encrypted first for extra safety.
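The least-significant-bit scheme can be sketched over a fake pixel buffer (a real program would read the pixel bytes from an image library; the cover data and message here are invented):

```python
# Hide a short message in the least significant bits of "pixel" bytes.

def hide(pixels, message):
    # Unpack the message into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(out)

def reveal(pixels, length):
    # Collect the lowest bit of each carrier byte and repack into bytes.
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

cover = bytes(range(256)) * 2          # stand-in for image pixel data
stego = hide(cover, b"HI")
print(reveal(stego, 2))                # b'HI'
# No carrier byte changes by more than 1 -- visually imperceptible:
print(max(abs(a - b) for a, b in zip(cover, stego)) <= 1)  # True
```

Since each pixel byte changes by at most one unit of brightness, the altered image is indistinguishable from the original to the eye.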
• Winnowing and Chaffing
Just as the name suggests, this technique adds chaff (garbage data) to the wheat (the message) before sending it, and then winnows, or removes, the chaff from the wheat at the receiver. Since winnowing does not use encryption, it is not affected by regulations on crypto products. The message is first broken into packets, and each packet is MACed using a MAC algorithm such as HMAC-SHA1; this is very similar to running the packet through a keyed hash function. Then chaff, bogus packets with invalid MACs, is added (chaffing) before the stream is sent. At the receiving end only those packets are accepted that produce the correct MAC (showing that no changes have been made), and the chaff is discarded; this is called winnowing.
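The process can be sketched with Python's hmac module. The packet layout, shared key and payloads below are all invented for the example:

```python
import hmac, hashlib

# Winnowing and chaffing sketch. Each packet is (sequence, data, MAC).
# Only sender and receiver share the MAC key; chaff packets carry bogus
# MACs and are discarded ("winnowed") on receipt. Nothing is encrypted.

key = b"shared-secret"

def mac(seq, data):
    return hmac.new(key, bytes([seq]) + data, hashlib.sha1).digest()

# Sender: the real packets (wheat) get valid MACs...
wheat = [(i, d, mac(i, d)) for i, d in enumerate([b"HELLO", b"WORLD"])]
# ...then chaff with invalid MACs is mixed into the stream.
chaff = [(0, b"XXXXX", b"\x00" * 20), (1, b"YYYYY", b"\x01" * 20)]
packets = sorted(wheat + chaff, key=lambda p: p[0])

# Receiver: keep only the packets whose MAC verifies.
received = [d for seq, d, tag in packets
            if hmac.compare_digest(tag, mac(seq, d))]
print(received)   # [b'HELLO', b'WORLD']
```

An eavesdropper without the key cannot tell wheat from chaff, yet the legitimate receiver recovers the message exactly, and no encryption was ever applied.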
Cryptanalysis
There are many kinds of cryptanalytic techniques:
1) Differential cryptanalysis
2) Linear cryptanalysis
3) Brute force cracking
4) Power analysis
5) Timing analysis, etc.
Cryptographers have traditionally analyzed the security of ciphers by modeling crypto algorithms as ideal mathematical objects. A modern cipher is conventionally a black box that accepts plaintext inputs and provides cipher text outputs; inside this box, the algorithm maps the inputs to the outputs using a predefined function that depends on the value of a secret key. The black box is described mathematically, and formal analysis is used to examine the system's security. In a modern cipher, an algorithm's security rests solely on the concealment of the secret key, so attack strategies often reduce to methods that can expose the value of the secret key. Unfortunately, hardware implementations of an algorithm can leak information about the secret key, which adversaries can use.
Mathematical attacks
Techniques such as differential and linear cryptanalysis, introduced in the early 1990s, are representative of traditional mathematical attacks. These attacks work by exploiting statistical properties of crypto algorithms to uncover potential weaknesses. They do not depend on a particular implementation of the algorithm but on the algorithm itself, so they can be broadly applied. Traditional attacks, however, require the acquisition and manipulation of large amounts of data. Attacks that exploit weaknesses in a particular implementation are an attractive alternative and are often more likely to succeed in practice.
Implementation attacks
The realities of a physical implementation can be extremely difficult to control, and often result in unintended leakage of side channel information such as power dissipation, timing information or faulty outputs. The leaked information is often correlated with the secret key, so enemies monitoring it may be able to learn the secret key and breach the security of the crypto system. Algorithms such as DES and RSA, which are now implemented in smart cards, are also under considerable threat. Smart cards are often used to store crypto keys and execute crypto algorithms, and data on the card is itself stored using cryptographic techniques.
Power consumption is one potential source of side channel information; since power is generally supplied by an external source, it can be directly observed. All calculations performed by a smart card operate on logical 0s and 1s, and current technological constraints result in different power consumption when manipulating a logic one versus a logic zero. Based on a spectral analysis of the power curve, or on the timing difference between handling a one and a zero, the secret key can be recovered by an adversary.
Countermeasures
Many countermeasures are being worked out to prevent implementation attacks such as power analysis or timing analysis. These attacks normally rest on the assumption that the operations being attacked occur at fixed intervals of time; if the operations are randomly shifted in time, statistical analysis of the side channel information becomes very difficult. The other side of the coin is that hardware implementations must be carefully designed so that they do not leak any side channel information. Hardware countermeasures are often difficult to design, analyse and test, so software methods of introducing random delay or masking the data are often the easier way to overcome this problem.
Conclusion
The Internet has brought with it an unparalleled rate of new technology adoption. Commercial establishments, industry and the armed forces need an assortment of cryptographic products and other mechanisms to provide privacy, authentication, message integrity and trust in order to achieve their missions. These mechanisms demand procedures, policies and law. However, cryptography is not an end unto itself but an enabler of safe business and communication. Good cryptography and good policies are therefore as essential for the future of Internet based communications as the applications that use them.

Tuesday, March 09, 2010

WEB TECHNOLOGY IN LAMP TECHNOLOGY






LAMP is a shorthand term for a web application platform consisting of Linux, Apache, MySQL and one of Perl or PHP. Together, these open source tools provide a world-class platform for deploying web applications. Running on the Linux operating system, the Apache web server, the MySQL database and the programming languages PHP or Perl deliver all of the components needed to build secure, scalable, dynamic websites. LAMP has been touted as "the killer app" of the open source world.

With many LAMP sites running e-business logic and e-commerce sites and requiring 24x7 uptime, ensuring the highest levels of data and application availability is critical. For organizations that have taken advantage of LAMP, these levels of availability are ensured by providing constant monitoring of the end-to-end application stack and immediate recovery of any failed solution components. Some solutions also support the movement of LAMP components among servers to remove the need for downtime associated with planned system maintenance.

The paper gives an overview of Linux, Apache and MySQL, and focuses mainly on PHP: its advantages over other active generation tools for interactive web design and its interface with an advanced database like MySQL. Finally, a conclusion is provided.








CONTENTS


• Introduction
• Linux
• Apache
• MySQL
• Features included in MySQL
• PHP
• Technologies on the client side
• Technologies on the server side
• The benefits of using PHP server side processing
• Browsers and their issues
• Applying LAMP
• When not to use LAMP?
• Advantages of LAMP
• Conclusion



INTRODUCTION:
One of the great "secrets" of almost all websites (aside from those that publish static .html pages) is that behind the scenes, the web server is actually just one part of a two or three tiered application server system. In the open source world, this explains the tremendous popularity of the Linux-Apache-MySQL-PHP (LAMP) environment. LAMP provides developers with a traditional two tiered application development platform: there is a database, and a "smart" web server able to communicate with the database. Clients only talk to the web server, while the web server transparently talks to the database when required. The following diagram illustrates how a typical LAMP server works.
Fig. Example architecture of LAMP
By combining these tools you can rapidly develop and deliver applications. Each of these tools is the best in its class, and a wealth of information is available for the beginner. Because LAMP is easy to get started with, yet capable of delivering enterprise scale applications, the LAMP software model just might be the way to go for your next, or your first, application. Let's take a look at the parts of the system.

LINUX:

Linux is presently the most commonly used implementation of UNIX. Built from the ground up as a UNIX work-alike operating system for the Intel 386/486/Pentium family of chips by a volunteer team of coders on the Internet, Linux has revitalized the old-school UNIX community and added many new converts. Linux development is led by Linus Torvalds. The core of the system is the Linux kernel; on top of the kernel, a Linux distribution will usually utilize many tools from the Free Software Foundation's GNU project. Linux has gained a huge amount of momentum and support, both from individuals and from large corporations such as IBM. It provides a standards compliant, robust operating system that inherits the UNIX legacy of security and stability. Originally developed for Intel x86 systems, Linux has been ported to everything from small embedded systems at one end of the spectrum up to large mainframes and clusters, and can run on most common hardware platforms.

APACHE:

Apache is the most popular web server on the Internet. Like Linux, MySQL and PHP, Apache is an open source project. It is based on the NCSA (National Center for Supercomputing Applications) web server; in 1995-1996 a group of developers coalesced around a collection of patches to the original NCSA web server, and this group evolved into the Apache Software Foundation. With the release of Apache 2.0, Apache has become a robust, well documented, multi-threaded web server. Particularly appealing in the 2.0 release is improved support for non-UNIX systems. Apache can run on a large number of hardware and software platforms, and since 1996 it has been the most popular web server on the Internet; presently it holds 67% of the market.

MySQL:

MySQL is a fast, flexible relational database. It is the most widely used relational database management system in the world, with over 4 million instances in use. MySQL is high-performance, robust, multi-threaded and multi-user, and utilizes a client server architecture. Today, more than 4 million web sites create, use, and deploy MySQL-based applications. MySQL's focus is on stability and speed; all aspects of the SQL standard that do not conflict with these performance goals are supported.

Features include:

• Portability: support for a wide variety of operating systems and hardware
• Speed and Reliability
• Ease of Use
• Multi-user support
• Scalability
• Standards Compliant
• Replication
• Low TCO (total cost of ownership)
• Quality Documentation
• Dual license (free and non-free)
• Full Text searching
• Support for transactions
• Wide application support


PHP:


What's next in the field of web design? It's already here. Today's webmasters are deluged with available technologies to incorporate into their designs; learning everything is fast becoming an impossibility. So your choice of design technologies becomes increasingly important if you don't want to be left behind when everyone else has moved on. But before we get to that, let's do a quick review of the previous generation of web design.
In the static generation of web design, pages were mostly HTML pages that relied solely on static text and images to relay their information over the Internet. These web pages lacked x and y coordinate positioning, and relied on hand-coded tables for somewhat accurate placement of images and text. Simple and straight to the point, web design was more like writing a book and publishing it online.
The second generation of web design (the one we are in now) would be considered the ACTIVE generation. For quite a while now the Internet has been drifting towards interactive web designs which allow users a more personal and dynamic experience when visiting websites. No longer is a great website simply a bunch of static text and images; a great website is now one which allows, indeed encourages, user interaction. No longer does knowing HTML inside out make you a webmaster, although that does help a great deal! Now, knowing how to use interactive technologies isn't just helpful, it's almost a requirement. Here are a few of the interactive technologies available for the webmasters of today.

Technologies on the client side:
1. ActiveX Controls: Developed by Microsoft, these are only fully functional in the Internet Explorer web browser. This eliminates them from being cross-platform, and thus from being a webmaster's number-one technology choice for the future. Disabling ActiveX controls in the IE web browser is something many people do for security, as the platform has been used by many for unethical and harmful things.

2. Java Applets: Java applets are self-contained programs written in the Java language and are supported by cross-platform web browsers. While not all browsers work with Java applets, many do. They can be included in web pages in almost the same way images can.

3. DHTML and Client-Side Scripting: DHTML, JavaScript, and VBScript all have in common the fact that the code is transmitted with the original web page, and the web browser interprets the code to create pages that are much more dynamic than static HTML pages. VBScript is only supported by Internet Explorer, which again makes it a bad choice for the web designer wanting to create cross-platform web pages. With Linux and other operating systems gaining in popularity, it makes little sense to lock yourself into one platform.
Of all the client-side options available, JavaScript has proved to be the most popular and most widely used, and it is the natural next step once you are an expert with HTML.

Technologies on the server side:
1. CGI: This stands for Common Gateway Interface. It wasn't all that long ago that the only dynamic solution for webmasters was CGI, and almost every web server in use today supports CGI in one form or another. The most widely used CGI language is Perl; Python, C, and C++ can also be used as CGI programming languages, but are not nearly as popular. The biggest disadvantage of CGI on the server side is its lack of scalability. Although mod_perl for Apache and FastCGI attempt to improve performance in this department, CGI is probably not the future of web design because of this very problem.
2. ASP: Another of Microsoft's attempts to "improve" things. ASP is a proprietary scripting language. Performance is best on Microsoft's own servers, of course, and the lack of widespread COM support has reduced the number of webmasters willing to bet the farm on another one of Microsoft's silver bullets.

3. Java Server Pages and Java Servlets: Server-side JavaScript is Netscape's answer to Microsoft's ASP technology. Since this technology is supported almost exclusively on the Netscape Enterprise Server, it is highly unlikely that it will ever become a serious contender in the battle for the webmaster's attention.

4. PHP: PHP is the most popular scripting language for developing dynamic web-based applications. Originally developed by Rasmus Lerdorf as a way of gathering web form data without using CGI, it has quickly grown and gathered a large collection of modules and features. The beauty of PHP is that it is easy to get started with, yet it is capable of extremely robust and complicated applications. As an embedded scripting language, PHP code is simply inserted into an HTML document, and when the page is delivered the PHP code is parsed and replaced with the output of the embedded PHP commands. PHP is easier to learn and generally faster than Perl-based CGI. Moreover, unlike ASP, PHP is totally platform independent and there are versions for most operating systems and servers.

The benefits of using PHP server side processing include the following:
• Reduces network traffic.
• Avoids cross-platform issues with operating systems and web browsers.
• Can send data to the client that isn't on the client computer.
• Quicker loading time: after the server interprets all the PHP code, the resulting page is transmitted as HTML.
• Increased security, since things can be coded into PHP that will never be viewed from the browser.


BROWSER:

Since all the tools are in place to deliver HTML content to a browser, it is assumed that control of the application will be through a browser-based interface. Using the browser and HTML as the GUI (Graphical User Interface) for your application is frequently the most logical choice. The browser is familiar and available on most computers and operating systems. Rendering of HTML is fairly standard, although frustrating examples of incompatibilities remain. Using HTML and HTML form elements displayed within a browser is easier than building a similarly configured user interface from the ground up. If your application is internal, you may want to develop for a specific browser and OS combination; this saves you the problems of browser inconsistencies and allows you to take advantage of OS-specific tools.

APPLYING LAMP:

1. Storing our data: Our data is going to be stored in the MySQL database. One instance of MySQL can contain many databases. Since our data will be stored in MySQL, it will be searchable, extendable, and accessible from many different machines or from the whole World Wide Web.
2. User interface: Although MySQL provides a command line client that we could use to enter our data, we are going to build a friendlier interface. This will be a browser-based interface, and we will use PHP as the glue between the browser and the database.
3. Programming: PHP is the glue that takes the input from the browser and adds the data to the MySQL database. For each action (add, edit, or delete) you would build a PHP script that takes the data from the HTML form, converts it into a SQL query, and updates the database.
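A minimal sketch of that glue logic, written here in Python with SQLite purely for illustration (the stack described above would use PHP and MySQL; the `contacts` table and its columns are hypothetical):

```python
import sqlite3

# Hypothetical "contacts" table standing in for the application's data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, email TEXT)")

def add_contact(form):
    """Take submitted form data and turn it into a parameterized SQL insert."""
    conn.execute("INSERT INTO contacts (name, email) VALUES (?, ?)",
                 (form["name"], form["email"]))
    conn.commit()

# Data as it might arrive from an HTML form submission.
add_contact({"name": "Alice", "email": "alice@example.com"})
rows = conn.execute("SELECT name, email FROM contacts").fetchall()
print(rows)  # [('Alice', 'alice@example.com')]
```

Note the parameterized query (`?` placeholders): building the SQL string by hand from form input is what opens the door to SQL injection, whatever language the glue is written in.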

4. Security: The standard method is to use the security and authentication features of the Apache web server. The mod_auth module allows for password-based authentication. You can also use allow/deny directives to limit access based on location. Using one or both of these Apache tools you can limit access based on who users are or where they are connecting from. Other security features you may want to use include mod_auth_ldap, mod_auth_oracle, and the certificate-based authentication provided by mod_ssl.
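As a rough illustration, both mechanisms might appear together in an Apache 2.0-era configuration like the following (the directory path and password file location are hypothetical):

```
<Directory "/var/www/admin">
    # mod_auth: password-based authentication
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile /etc/httpd/passwords    # created with the htpasswd utility
    Require valid-user

    # allow/deny directives: limit access by client location
    Order deny,allow
    Deny from all
    Allow from 192.168.1.0/24
</Directory>
```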


When not to use LAMP?

Applications not well suited for LAMP include those that have a frequent need to exchange large amounts of transient data, or that have particular and demanding needs for state maintenance. It should be remembered that at its core HTTP is a stateless protocol, and although cookies allow for some session maintenance, they may not be satisfactory for all applications. If you find yourself fighting the HTTP protocol at every turn and avoiding the "URL as a resource mapped to the file system" arrangement of web applications, then perhaps LAMP is not the best choice for that particular application.

ADVANTAGES OF LAMP:

• Seamless integration with Linux, Apache and MySQL to ensure the highest levels of availability for websites running on LAMP.
• Full 32-bit and 64-bit support for Xeon, Itanium and Opteron-based systems; runs on enterprise Linux distributions from Red Hat and SuSE.
• Supports Active/Active and Active/Standby LAMP configurations of up to 32 nodes.
• Data can reside on shared SCSI, Fibre Channel, or Network Attached Storage devices, or on replicated volumes.
• Maximizes e-commerce revenues; minimizes e-business disruption caused by IT outages.
• Automated availability monitoring, failover recovery, and failback of all LAMP application and IT-infrastructure resources.
• Intuitive Java-based web interface provides at-a-glance LAMP status and simple administration.
• Easily adapted to sites running Oracle, DB2, and PostgreSQL.
• Solutions also exist for other Linux application environments including Rational ClearCase, Sendmail, Lotus Domino and mySAP.

CONCLUSION:
While Flash, ActiveX, and other proprietary elements will continue to creep in and entice webmasters, in the end compatibility issues and the price of development will dictate what eventually wins out in the next generation of web design. For the foreseeable future, PHP, HTML, and databases are going to be in the future of interactive web design, and that's where I'm placing my bets. Open Source continues to play an important role in driving web technologies. Even though Microsoft would like to be the only player on the field, Open Source, with its flexibility, will almost certainly be the winner in the end. Betting the farm on LAMP (Linux, Apache, MySQL, PHP) seems much wiser to me than the alternative (Microsoft, IIS, ASP) ... not to mention it's a much less expensive route to follow.

A NOVEL TECHNIQUE TO ENHANCE THE SECURITY IN SYMMETRIC KEY CRYPTOGRAPHY

ABSTRACT
Cryptography is the science of keeping private information private and safe. In today's high-tech information economy the need for privacy is far greater. In this paper we describe a common model for the enhancement of all symmetric key algorithms, such as AES, DES and the TCE algorithm. The proposed method combines the symmetric key and a sloppy key, from which the new key is extracted. The sloppy key is changed for a short range of packets transmitted in the network.

INTRODUCTION

Code books and cipher wheels have given way to microprocessors and hard drives, but the goal is still the same: take a message and obscure its meaning so only the intended recipient can read it. In today's market, key size is increased to keep up with the ever-growing capabilities of today's code breakers. Classical cryptanalysis involves an interesting combination of analytical reasoning, application of mathematical tools, pattern finding, patience, determination, and luck. A standard cryptanalytic attack is to know some plaintext matching a given piece of cipher text and try to determine the key, which maps one to the other. This plaintext can be known because it is standard or because it is guessed. If text is guessed to be in a message, its position is probably not known, but a message is usually short enough that the cryptanalyst can assume the known plaintext is in each possible position and do attacks for each case in parallel. In this case, the known plaintext can be something so common that it is almost guaranteed to be in a message. A strong encryption algorithm will be unbreakable not only under known plaintext (assuming the enemy knows all the plaintext for a given cipher text) but also under "adaptive chosen plaintext" -- an attack making life much easier for the cryptanalyst. In this attack, the enemy gets to choose what plaintext to use and gets to do this over and over, choosing the plaintext for round N+1 only after analyzing the result of round N. For example, as far as we know, DES is reasonably strong even under an adaptive
chosen plaintext attack. Of course, we do not have access to the secrets of government cryptanalytic services. Still, it is the working assumption that DES is reasonably strong under known plaintext and triple-DES is very strong under all attacks.
To summarize, the basic types of cryptanalytic attacks in order of difficulty for the attacker, hardest first, are: Cipher text only: the attacker has only the encoded message from which to determine the plaintext, with no knowledge whatsoever of the latter. A cipher text only attack is usually presumed to be possible, and a code's resistance to it is considered the basis of its cryptographic security. Known plaintext: the attacker has the plaintext and corresponding cipher text of an arbitrary message not of his choosing. The particular message of the sender’s is said to be ‘compromised’.
In some systems, one known cipher text-plaintext pair will compromise the overall system, both prior and subsequent transmissions, and resistance to this is characteristic of a secure code. Under the following attacks, the attacker has the far less likely or plausible ability to 'trick' the sender into encrypting or decrypting arbitrary plaintexts or cipher texts. Codes that resist these attacks are considered to have the utmost security. Chosen plaintext: the attacker has the capability to find the cipher text corresponding to an arbitrary plaintext message of his choosing. Chosen cipher text: the attacker can choose arbitrary cipher text and find the corresponding decrypted plaintext. This attack can show up in public key systems, where it may reveal the private key. Adaptive chosen plaintext: the attacker can determine the cipher text of chosen plaintexts in an interactive or iterative process based on previous results. This is the general name for a method of attacking product ciphers called 'differential cryptanalysis'. A common model for the enhancement of the existing symmetric algorithms has been proposed.

METHODOLOGY

Advantage of formulating mathematically:
In basic cryptology you can never prove that a cryptosystem is secure; the best you can say is that no attack against it is known. A strong cryptosystem must have this property, but having this property is no guarantee that a cryptosystem is strong. In contrast, the purpose of mathematical cryptology is to precisely formulate and, if possible, prove the statement that a cryptosystem is strong. We say, for example, that a cryptosystem is secure against all (passive) attacks if any nontrivial attack against the system is too slow to be practical. If we can prove this statement then we have confidence that our cryptosystem will resist any (passive) cryptanalytic technique. If we can reduce this statement to some well-known unsolved problem then we still have confidence that the cryptosystem isn't easy to break. Other parts of cryptology are also amenable to mathematical definition. Again the point is to explicitly identify what assumptions we're making and prove that they produce the desired results. We can figure out what it means for a particular cryptosystem to be used properly: it just means that the assumptions are valid. The same methodology is useful for cryptanalysis too: the cryptanalyst can take advantage of incorrect assumptions.
Compression aids encryption by reducing the redundancy of the plaintext. This increases the amount of cipher text you can send encrypted under a given number of key bits. Nearly all practical compression schemes, unless they have been designed with cryptography in mind, produce output that actually starts off with high redundancy. Compression is generally of value, however, because it removes other known plaintext in the middle of the file being encrypted. In general, the lower the redundancy of the plaintext fed to an encryption algorithm, the more difficult the cryptanalysis of that algorithm. In addition, compression shortens the input file, shortening the output file and reducing the amount of CPU time required to run the encryption algorithm. Compression after encryption is silly: if an encryption algorithm is good, it will produce output that is statistically indistinguishable from random numbers, and no compression algorithm will successfully compress random numbers.
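The point that compression belongs before encryption, never after, can be demonstrated directly. A small Python check, with zlib standing in for any general-purpose compressor and os.urandom standing in for statistically random cipher text:

```python
import os
import zlib

# Redundant plaintext compresses well...
plaintext = b"to be or not to be, that is the question. " * 50
compressed = zlib.compress(plaintext)
assert len(compressed) < len(plaintext)

# ...but data indistinguishable from random numbers does not: compressing it
# only adds container overhead, so the output is no smaller than the input.
random_like = os.urandom(len(compressed))
assert len(zlib.compress(random_like)) >= len(random_like)

print(len(plaintext), len(compressed))
```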

TRIANGULAR-CODED ENCRYPTION ALGORITHM:
According to the Triangular Algorithm, compression is completed along with encryption. Consider a triangle ABC with sides 'a', 'b' and 'c' opposite the angles A, B and C respectively. 'a' and 'b' are the actual data and 'c' is the cipher text. Angle 'C' is the symmetric key, which is used for both encryption and decryption in this algorithm. Angle 'A' keeps changing for different measurements of sides 'a' and 'b'. The level of encryption is increased to enhance the security of the cipher text.


Figure 1. Triangle formed by the plain texts 'a' and 'b' with C and A as the angles.

In the encryption phase, the transmitter knows the sides 'a', 'b' and the angle 'C'. We get the cipher text 'c' from the sides 'a' and 'b' and the angle 'C'. The angle 'A' is also calculated, from the parameters 'a', 'b' and 'C'. 'c' and 'A' are the parameters to be transmitted. The formula used to calculate the cipher text 'c' from the sides 'a', 'b' and the angle 'C' of the triangle is given below.

c = sqrt(a^2 + b^2 - 2ab cos C)
Where
a: plain text1
b: plain text2
C: the secret key
c: the cipher text

A = sin^-1((a sin C) / c)

Where
A: varying angle
a: plain text1
c: cipher Text
C: secret key

Now in the decryption phase, the receiver knows the parameters 'c', 'A' and 'C', which are used to extract the actual data 'a' and 'b'. So it is obvious that 'C' is the symmetric key known by both the sender and receiver. But the side 'a' changes while 'C' remains constant; naturally the angle 'A' changes too.
B = 180 – (A+C)
Where
B: opposite angle of ‘b’
A: varying angle
C: secret key
a = (c sin A) / sin C
Where
a: plain text1
c: cipher text
A: varying angle
C: secret key

b = (c sin B) / sin C

Where
b: plain text2
c: cipher text
B: opposite angle of ‘b’
C: secret key

Thus the plain texts 'a' and 'b' are retrieved by the above formulas. The values of the plain texts 'a' and 'b' are found based on the cipher text 'c', the secret key 'C' and the varying angle 'A'.
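The full round trip can be sketched in Python from the formulas above. This is only an illustrative sketch: it assumes the varying angle A stays below 90 degrees, since the arcsine used to recover A is otherwise ambiguous.

```python
import math

def encrypt(a, b, C_deg):
    """Cipher text c via the law of cosines; varying angle A via the law of sines."""
    C = math.radians(C_deg)
    c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))
    A_deg = math.degrees(math.asin(a * math.sin(C) / c))
    return c, A_deg                      # (c, A) are the transmitted parameters

def decrypt(c, A_deg, C_deg):
    """Recover the plain texts a and b from c, A and the secret key C."""
    B_deg = 180 - (A_deg + C_deg)                    # B = 180 - (A + C)
    sin_C = math.sin(math.radians(C_deg))
    a = c * math.sin(math.radians(A_deg)) / sin_C    # a = (c sin A) / sin C
    b = c * math.sin(math.radians(B_deg)) / sin_C    # b = (c sin B) / sin C
    return a, b

c, A = encrypt(3, 4, 90)    # the familiar 3-4-5 right triangle: c = 5.0
a, b = decrypt(c, A, 90)
print(round(a, 6), round(b, 6))  # 3.0 4.0
```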



CRYPTANALYSIS:
The sum of angles in a Triangle is 180.
(i.e.) θ1 + θ2 + θ3 = 180
Since θ1, the angle opposite the base, is considered to be the secret key, it can vary from 1 to 178 degrees, the other two angles taking 1 degree each when θ1 takes its maximum value:
Max(θ1) <= 180 - 1 - 1
If θ1, the key, is specified to 7 decimal places, the number of possible keys between 1 and 2 is 1 * 10 ^ 7, and the range between 1 and n for 7 decimals is as follows:
Rn = n * 10 ^ 7
where Rn is the range for n.
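The size of the resulting key space follows directly from that formula; a small Python check:

```python
# Key space of the angle key theta1: it ranges over 1..178 degrees, and with
# 7 decimal places of precision each unit interval holds 10^7 distinct keys.
DECIMALS = 7

def key_range(n):
    """Rn = n * 10^7: distinct 7-decimal keys in a range of n degrees."""
    return n * 10 ** DECIMALS

print(key_range(1))    # 10000000 keys between 1 and 2 degrees
print(key_range(178))  # 1780000000 keys over the whole usable range
```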
PROPOSED MODEL (Universal Security Reinforcement Model):
The sender and receiver should have one more key, called the sloppy key, in addition to their conventional key. This sloppy key is changed dynamically, based on the data contained in the stream transmitted over the net. The new key is synthesized with the conventional encryption key, the symmetric key (Smk), and the sloppy key (Sk) is created with the help of the sloppy key generator, Ø:
Sk = Ø(sk, Smk, Vc)
Where,
Smk - symmetric key (the conventional key)
sk - the new key
Vc - validity count
Ø - sloppy key generator (this may be any operation, such as addition, subtraction, log, sin, cos, etc.)
Sk - the sloppy key
The model works as illustrated by the following example. Let the data to be transmitted be:

21 52 43 15 75 26 17 28 99 10 45
94 72 03 62 96 92 63 34 20
38 19 45 30 28 52 92 51 80 23

Assume the first new key is 4. Then for the first 4 data values (up to the 15), the new key is 4. For example, for 52 the new key is 4; if the symmetric key is, say, 5, the sloppy key is calculated from 4 and 5 (e.g. by addition), so the sloppy key is 9. For those first 4 data values the sloppy key is 9. The next new key is 15 (the value at the 4th position), so for the next 15 data values the new key is 15 and the sloppy key is calculated as before. The next new key is then 63, and the process is repeated.
So the sloppy key is changed block-wise. If you want to reduce the block size, you set the validity count Vc, so that hacking becomes more difficult.
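A sketch of that block-wise key schedule in Python. The combine operation (addition) follows the example; where the text is ambiguous, this sketch assumes the next new key is the data value sitting at the end of the current block:

```python
def sloppy_schedule(data, first_new_key, symmetric_key, combine=lambda x, y: x + y):
    """Return a (value, sloppy_key) pair for every transmitted value.

    The sloppy key for a block is combine(new_key, symmetric_key); the next
    block's new key is the data value at the current block boundary.
    """
    schedule = []
    i, new_key = 0, first_new_key
    while i < len(data) and new_key > 0:
        sloppy = combine(new_key, symmetric_key)
        block = data[i:i + new_key]
        schedule.extend((value, sloppy) for value in block)
        i += len(block)
        new_key = data[i - 1]   # value at the block boundary becomes the next key
    return schedule

data = [21, 52, 43, 15, 75, 26, 17, 28, 99, 10, 45,
        94, 72, 3, 62, 96, 92, 63, 34, 20,
        38, 19, 45, 30, 28, 52, 92, 51, 80, 23]

sched = sloppy_schedule(data, first_new_key=4, symmetric_key=5)
print(sched[0], sched[4])  # (21, 9) in the first block, (75, 20) in the next
```

With these assumptions, the first block of 4 values gets sloppy key 4 + 5 = 9, and the second block (new key 15, taken from the 4th position) gets sloppy key 15 + 5 = 20, matching the worked example.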

CONCLUSION:
In summary, a common model was suggested for the enhancement of all crypto algorithms, including the TCE algorithm emphasized in this paper. The main intention of this paper is to reinforce the security of all existing algorithms using the methodology described above. This model can be implemented wherever privacy is of great importance. The key concept of this approach is that a sloppy key (Sk) is generated along with the symmetric key (Smk); the sloppy key (Sk) is determined using the sloppy key generator (Ø). As the range given by the validity count (Vc) is decreased, the sloppy key (Sk) changes more frequently, which increases the difficulty of hacking.