EDUCATION TIPS & STUDY TRICKS

Monday, November 01, 2010

History of computing hardware






The history of computing hardware is the record of the constant drive to make computer hardware faster, cheaper, and store more data.
Before the development of the general-purpose computer, most calculations were done by humans. Tools to help humans calculate are generally called calculators. Calculators continue to develop, but computers add the critical element of conditional response, allowing automation of both numerical calculation and in general, automation of many symbol-manipulation tasks. Computer technology has undergone profound changes every decade since the 1940s.
Computing hardware has become a platform for uses other than computation, such as automation, communication, control, entertainment, and education. Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements.
Aside from written numerals, the first aids to computation were purely mechanical devices that required the operator to set up the initial values of an elementary arithmetic operation, then propel the device through manual manipulations to obtain the result. An example would be a slide rule where numbers are represented by points on a logarithmic scale and computation is performed by setting a cursor and aligning sliding scales. Numbers could be represented in a continuous "analog" form, where a length or other physical property was proportional to the number. Or, numbers could be represented in the form of digits, automatically manipulated by a mechanism. Although this approach required more complex mechanisms, it made for greater precision of results.
Both analog and digital mechanical techniques continued to be developed, producing many practical computing machines. Electrical methods rapidly improved the speed and precision of calculating machines, at first by providing motive power for mechanical calculating devices, and later directly as the medium for representation of numbers. Numbers could be represented by voltages or currents and manipulated by linear electronic amplifiers. Or, numbers could be represented as discrete binary or decimal digits, and electrically-controlled switches and combinatorial circuits could perform mathematical operations.
The invention of electronic amplifiers made calculating machines much faster than mechanical or electromechanical predecessors. Vacuum tube amplifiers gave way to discrete transistors, and then rapidly to monolithic integrated circuits. By defeating the tyranny of numbers, integrated circuits made high-speed and low-cost digital computers a widespread commodity.
This article covers major developments in the history of computing hardware, and attempts to put them in context. For a detailed timeline of events, see the computing timeline article. The history of computing article treats methods intended for pen and paper, with or without the aid of tables. Since all computers rely on digital storage, and tend to be limited by the size and speed of memory, the history of computer data storage is tied to the development of computers.

Contents
• 1 Before computer hardware
• 2 Earliest hardware
• 3 1801: punched card technology
• 4 Desktop calculators
• 5 Advanced analog computers
• 6 Digital computation
o 6.1 Zuse
o 6.2 Colossus
o 6.3 American developments
▪ 6.3.1 ENIAC
• 7 First-generation machines
• 8 Commercial computers
• 9 Second generation: transistors
• 10 Post-1960: third generation and beyond
• 11 See also
• 12 Notes
• 13 References
• 14 Further reading
• 15 External links

Before computer hardware
The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.[1]
Earliest hardware
Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.), which represented counts of items, probably livestock or grains, sealed in containers.[2][3] Counting rods are another example.
The abacus was used for arithmetic tasks; a form of it was in use in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented, including the Roman abacus. In a medieval counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
A number of analog computers were constructed in ancient and medieval times to perform astronomical calculations. These include the Antikythera mechanism and the astrolabe from ancient Greece (c. 150–100 BC), which are generally regarded as the first mechanical analog computers.[4] Other early mechanical computing devices include the planisphere and other instruments invented by Abū Rayhān al-Bīrūnī (c. AD 1000); the equatorium and universal latitude-independent astrolabe by Abū Ishāq Ibrāhīm al-Zarqālī (c. AD 1015); the astronomical analog computers of other medieval Muslim astronomers and engineers; and the astronomical clock tower of Su Song (c. AD 1090) during the Song Dynasty.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[5] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer traveling across a gateway causing automatic doors to open every hour,[6][7] and five robotic musicians who play music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.[5]
Suanpan (the number represented on this abacus is 6,302,715,408)
Scottish mathematician and physicist John Napier noted that multiplication and division of numbers could be performed by addition and subtraction, respectively, of the logarithms of those numbers. While producing the first logarithmic tables, Napier needed to perform many multiplications, and it was at this point that he designed Napier's bones, an abacus-like device used for multiplication and division.[8] Since real numbers can be represented as distances or intervals on a line, the slide rule was invented in the 1620s to allow multiplication and division operations to be carried out significantly faster than was previously possible.[9] Slide rules were used by generations of engineers and other mathematically inclined professional workers, until the invention of the pocket calculator.[10]
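Napier's insight, that log(ab) = log(a) + log(b) turns multiplication into addition, is what both his tables and the later slide rule exploit. A minimal modern sketch (the function name is invented for illustration):

```python
import math

def slide_rule_multiply(a, b):
    """Multiply two positive numbers the way a slide rule does:
    add their logarithms, then take the antilogarithm."""
    return math.exp(math.log(a) + math.log(b))

print(slide_rule_multiply(6.0, 7.0))  # ~42.0, limited only by floating-point precision
```

A physical slide rule performs the same addition by aligning lengths proportional to the logarithms, so its precision is limited by how finely the scales can be read.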

Yazu Arithmometer. Patented in Japan in 1903. Note the lever for turning the gears of the calculator.
German polymath Wilhelm Schickard built the first digital mechanical calculator in 1623, and thus became the father of the computing era.[11] Since his calculator used techniques such as cogs and gears first developed for clocks, it was also called a 'calculating clock'. It was put to practical use by his friend Johannes Kepler, who revolutionized astronomy when he condensed decades of astronomical observations into algebraic expressions. An original calculator by Blaise Pascal (1640) is preserved in the Zwinger Museum. Machines by Pascal (the Pascaline, 1642) and Gottfried Wilhelm von Leibniz (the Stepped Reckoner, c. 1672) followed. Leibniz once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used."[12]
Around 1820, Charles Xavier Thomas created the first successful, mass-produced mechanical calculator, the Thomas Arithmometer, that could add, subtract, multiply, and divide.[13] It was mainly based on Leibniz' work. Mechanical calculators, like the base-ten addiator, the comptometer, the Monroe, the Curta and the Addo-X remained in use until the 1970s. Leibniz also described the binary numeral system,[14] a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of the 1800s and even ENIAC of 1945) were based on the decimal system;[15] ENIAC's ring counters emulated the operation of the digit wheels of a mechanical adding machine.
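The decimal-versus-binary contrast above comes down to how many distinct positions each counting element must hold: a ring counter emulating a digit wheel needs ten, a two-state switch only two. A small sketch (hypothetical helper, not period code):

```python
def digits(n, base):
    """Return the digit sequence of n in the given base, least significant
    digit first: the values a digit wheel (base 10) or a chain of two-state
    switches (base 2) would have to store."""
    ds = []
    while n:
        n, d = divmod(n, base)
        ds.append(d)
    return ds or [0]

print(digits(1945, 10))  # four decimal digits: [5, 4, 9, 1]
print(digits(1945, 2))   # eleven binary digits
```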
In Japan, Ryoichi Yazu patented a mechanical calculator called the Yazu Arithmometer in 1903. It consisted of a single cylinder and 22 gears, and employed the mixed base-2 and base-5 number system familiar to users of the soroban (Japanese abacus). Carry and end of calculation were determined automatically.[16] More than 200 units were sold, mainly to government agencies such as the Ministry of War and agricultural experiment stations.[17][18]
1801: punched card technology
Main article: Analytical engine. See also: Logic piano

Punched card system of a music machine, also referred to as Book music
In 1801, Joseph-Marie Jacquard developed a loom in which the pattern being woven was controlled by punched cards. The series of cards could be changed without changing the mechanical design of the loom. This was a landmark point in programmability.
In 1833, Charles Babbage moved on from developing his difference engine to developing a more complete design, the analytical engine, which would draw directly on Jacquard's punched cards for its programming.[19] In 1835, Babbage described his analytical engine. It was the plan of a general-purpose programmable computer, employing punched cards for input and a steam engine for power, using the positions of gears and shafts to represent numbers. His initial idea was to use punched cards to control a machine that could calculate and print logarithmic tables with great precision (a special-purpose machine). The idea soon developed into a general-purpose programmable computer, his analytical engine. While his design was sound and the plans were probably correct, or at least debuggable, the project was slowed by various problems. Babbage was a difficult man to work with and argued with anyone who did not respect his ideas. All the parts for his machine had to be made by hand, and small errors in each part could sum to large discrepancies in a machine with thousands of parts, so the parts had to be made to much tighter tolerances than were usual at the time. The project dissolved in disputes with the artisan who built the parts and ended with the depletion of government funding. Ada Lovelace, Lord Byron's daughter, translated and added notes to the "Sketch of the Analytical Engine" by Luigi Federico, Conte Menabrea.[20]

IBM 407 tabulating machine (1961)
A reconstruction of the Difference Engine II, an earlier, more limited design, has been operational since 1991 at the London Science Museum. With a few trivial changes, it works as Babbage designed it and shows that Babbage was right in theory. The museum used computer-operated machine tools to construct the necessary parts, following tolerances which a machinist of the period would have been able to achieve. The failure of Babbage to complete the engine can be chiefly attributed to difficulties not only related to politics and financing, but also to his desire to develop an increasingly sophisticated computer.
Following in the footsteps of Babbage, although unaware of his earlier work, was Percy Ludgate, an accountant from Dublin, Ireland. He independently designed a programmable mechanical computer, which he described in a work that was published in 1909.
In the late 1880s, the American Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine readable media had been for control (automatons such as piano rolls or looms), not data. "After some initial trials with paper tape, he settled on punched cards…"[21] Hollerith came to use punched cards after observing how railroad conductors encoded personal characteristics of each passenger with punches on their tickets. To process these punched cards he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. His machines used mechanical relays (and solenoids) to increment mechanical counters. Hollerith's method was used in the 1890 United States Census and the completed results were "... finished months ahead of schedule and far under budget".[22] Hollerith's company eventually became the core of IBM. IBM developed punch card technology into a powerful tool for business data-processing and produced an extensive line of unit record equipment. By 1950, the IBM card had become ubiquitous in industry and government. The warning printed on most cards intended for circulation as documents (checks, for example), "Do not fold, spindle or mutilate," became a motto for the post-World War II era.[23]
Punched card with the extended alphabet
Leslie Comrie's articles on punched card methods and W.J. Eckert's publication of Punched Card Methods in Scientific Computation in 1940, described techniques which were sufficiently advanced to solve differential equations[24] or perform multiplication and division using floating point representations, all on punched cards and unit record machines. In the image of the tabulator above, note the patch panel on the right side of the machine, with a row of toggle switches above it. The Thomas J. Watson Astronomical Computing Bureau, Columbia University performed astronomical calculations representing the state of the art in computing.[25]
Computer programming in the punch card era revolved around the computer center. The computer users, for example, science and engineering students at universities, would submit their programming assignments to their local computer center in the form of a stack of cards, one card per program line. They then had to wait for the program to be queued for processing, compiled, and executed. In due course a printout of any results, marked with the submitter's identification, would be placed in an output tray outside the computer center. In many cases these results would comprise solely a printout of error messages, necessitating another edit-compile-run cycle.[26] Punched cards are still used and manufactured to this day, and their distinctive dimensions (and 80-column capacity) can still be recognized in forms, records, and programs around the world.
Desktop calculators
Main article: Calculator

The Curta calculator can also do multiplication and division
By the 1900s, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. The word "computer" was a job title assigned to people who used these calculators to perform mathematical calculations. By the 1920s Lewis Fry Richardson's interest in weather prediction led him to propose human computers and numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model its weather using the Navier-Stokes equations.[27]
Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. During the Manhattan project, future Nobel laureate Richard Feynman was the supervisor of a roomful of human computers, many of them female mathematicians, who understood the differential equations being solved for the war effort.
In 1948, the Curta was introduced. This was a small, portable, mechanical calculator that was about the size of a pepper grinder. Over time, during the 1950s and 1960s a variety of different brands of mechanical calculators appeared on the market. The first all-electronic desktop calculator was the British ANITA Mk.VII, which used a Nixie tube display and 177 subminiature thyratron tubes. In June 1963, Friden introduced the four-function EC-130. It had an all-transistor design, 13-digit capacity on a 5-inch (130 mm) CRT, and introduced Reverse Polish notation (RPN) to the calculator market at a price of $2200. The EC-132 model added square root and reciprocal functions. In 1965, Wang Laboratories produced the LOCI-2, a 10-digit transistorized desktop calculator that used a Nixie tube display and could compute logarithms.
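Reverse Polish notation, which the EC-130 brought to the calculator market, needs no parentheses: operands are pushed onto a stack and each operator pops its arguments. A minimal evaluator sketch (not Friden's actual logic):

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish notation expression using a stack."""
    stack = []
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()      # second operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *".split()))  # (3 + 4) * 2 = 14.0
```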
Advanced analog computers
Main article: Analog computer

Cambridge differential analyzer, 1938
Before World War II, mechanical and electrical analog computers were considered the "state of the art", and many thought they were the future of computing. Analog computers take advantage of the strong similarities between the mathematics of small-scale properties—the position and motion of wheels or the voltage and current of electronic components—and the mathematics of other physical phenomena, for example, ballistic trajectories, inertia, resonance, energy transfer, momentum, and so forth. They model physical phenomena with electrical voltages and currents[28] as the analog quantities.
Centrally, these analog systems work by creating electrical analogs of other systems, allowing users to predict behavior of the systems of interest by observing the electrical analogs.[29] The most useful of the analogies was the way the small-scale behavior could be represented with integral and differential equations, and could be thus used to solve those equations. An ingenious example of such a machine, using water as the analog quantity, was the water integrator built in 1928; an electrical example is the Mallock machine built in 1941. A planimeter is a device which does integrals, using distance as the analog quantity. Unlike modern digital computers, analog computers are not very flexible, and need to be rewired manually to switch them from working on one problem to another. Analog computers had an advantage over early digital computers in that they could be used to solve complex problems using behavioral analogues while the earliest attempts at digital computers were quite limited.
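What a wired integrator does continuously can be mimicked digitally by accumulating many small steps. The toy sketch below solves the decay equation dy/dt = -y this way; it illustrates the kind of problem an analog machine handles by feedback wiring, not how such a machine is built:

```python
def integrate_decay(y0, dt, steps):
    """Accumulate dy/dt = -y by discrete Euler steps, a digital stand-in
    for the continuous accumulation an analog integrator performs."""
    y = y0
    for _ in range(steps):
        y += -y * dt
    return y

# Integrating to t = 1 with a fine step approaches the exact value e**-1
print(integrate_decay(1.0, 0.001, 1000))
```

On an analog computer the "step size" is effectively zero: the solution is a continuously evolving voltage, which is why rewiring, not reprogramming, changes the problem being solved.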
Some of the most widely deployed analog computers included devices for aiming weapons, such as the Norden bombsight[30] and the fire-control systems,[31] such as Arthur Pollen's Argo system for naval vessels. Some stayed in use for decades after WWII; the Mark I Fire Control Computer was deployed by the United States Navy on a variety of ships from destroyers to battleships. Other analog computers included the Heathkit EC-1, and the hydraulic MONIAC Computer which modeled econometric flows.[32]
The art of analog computing reached its zenith with the differential analyzer,[33] invented in 1876 by James Thomson and built by H. W. Nieman and Vannevar Bush at MIT starting in 1927. Fewer than a dozen of these devices were ever built; the most powerful was constructed at the University of Pennsylvania's Moore School of Electrical Engineering, where the ENIAC was built. Digital electronic computers like the ENIAC spelled the end for most analog computing machines, but hybrid analog computers, controlled by digital electronics, remained in substantial use into the 1950s and 1960s, and later in some specialized applications. The precision of a digital device is limited by the number of digits it represents, whereas an analog device is limited by its measurement accuracy.[34] As electronics progressed during the twentieth century, the problems of operating at low voltages while maintaining high signal-to-noise ratios[35] were steadily addressed; a digital circuit is, after all, a specialized form of analog circuit designed to operate at standardized settings (continuing in the same vein, logic gates can be realized as forms of digital circuits). As digital computers became faster and gained larger memories (for example, RAM or internal storage), they almost entirely displaced analog computers, and computer programming, or coding, arose as another human profession.
Digital computation

Punched tape programs would be much longer than the short fragment of yellow paper tape shown.
The era of modern computing began with a flurry of development before and during World War II, as electronic circuit elements replaced mechanical equivalents, and digital calculations replaced analog calculations. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium. Defining a single point in the series as the "first computer" misses many subtleties (see the table "Defining characteristics of some early digital computers of the 1940s" below).
Alan Turing's 1936 paper[36] proved enormously influential in computing and computer science in two ways. Its main purpose was to prove that there were problems (namely the halting problem) that could not be solved by any sequential process. In doing so, Turing provided a definition of a universal computer which executes a program stored on tape. This construct came to be called a Turing machine.[37] Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
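Turing's construct can be sketched as a table of (state, symbol) rules driving a read/write head along a tape. The machine below is entirely hypothetical: it flips each bit of its input and halts at the first blank:

```python
def run_turing(tape, rules, state='s', pos=0, halt='H', limit=1000):
    """Minimal single-tape Turing machine.  rules maps (state, symbol)
    to (symbol_to_write, head_move, next_state)."""
    cells = dict(enumerate(tape))
    for _ in range(limit):
        if state == halt:
            break
        sym = cells.get(pos, 'B')          # 'B' is the blank symbol
        write, move, state = rules[(state, sym)]
        cells[pos] = write
        pos += move
    return [cells[i] for i in sorted(cells)]

# Rules for a machine that flips every bit, then halts on blank:
rules = {('s', '1'): ('0', 1, 's'),
         ('s', '0'): ('1', 1, 's'),
         ('s', 'B'): ('B', 0, 'H')}
print(run_turing(list("1011"), rules))  # ['0', '1', '0', '0', 'B']
```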

Nine-track magnetic tape
For a computing machine to be a practical general-purpose computer, there must be some convenient read-write mechanism, punched tape, for example. With a knowledge of Alan Turing's theoretical 'universal computing machine' John von Neumann defined an architecture which uses the same memory both to store programs and data: virtually all contemporary computers use this architecture (or some variant). While it is theoretically possible to implement a full computer entirely mechanically (as Babbage's design showed), electronics made possible the speed and later the miniaturization that characterize modern computers.
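The defining trait of the von Neumann architecture, a single memory holding both instructions and data, can be shown with a toy machine (the opcodes and memory layout here are invented for illustration):

```python
def run(memory, pc=0):
    """Toy stored-program machine: instructions and data share one list.
    Each instruction is an (opcode, address) pair; plain numbers are data."""
    acc = 0
    while True:
        op, addr = memory[pc]
        pc += 1
        if op == 'LOAD':
            acc = memory[addr]
        elif op == 'ADD':
            acc += memory[addr]
        elif op == 'STORE':
            memory[addr] = acc
        elif op == 'HALT':
            return memory

program = [('LOAD', 4), ('ADD', 5), ('STORE', 6), ('HALT', 0), 2, 3, 0]
print(run(program)[6])  # 5; a program could just as easily overwrite its own instructions
```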
There were three parallel streams of computer development in the World War II era; the first stream largely ignored, and the second stream deliberately kept secret. The first was the German work of Konrad Zuse. The second was the secret development of the Colossus computers in the UK. Neither of these had much influence on the various computing projects in the United States. The third stream of computer development, Eckert and Mauchly's ENIAC and EDVAC, was widely publicized.[38][39]
George Stibitz is internationally recognized as one of the fathers of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator that he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to calculate using binary form.[40]
Zuse
Main article: Konrad Zuse

A reproduction of Zuse's Z1 computer
Working in isolation in Germany, Konrad Zuse started construction in 1936 of his first Z-series calculators featuring memory and (initially limited) programmability. Zuse's purely mechanical, but already binary Z1, finished in 1938, never worked reliably due to problems with the precision of parts.
Zuse's later machine, the Z3,[41] was finished in 1941. It was based on telephone relays and did work satisfactorily. The Z3 thus became the first functional program-controlled, all-purpose, digital computer. In many ways it was quite similar to modern machines, pioneering numerous advances, such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.
Programs were fed into the Z3 on punched films. Conditional jumps were missing, but it was later proved theoretically (in 1998) that the Z3 was still a universal computer (ignoring its physical storage size limitations). In two 1936 patent applications, Konrad Zuse also anticipated that machine instructions could be stored in the same storage used for data—the key insight of what became known as the von Neumann architecture, first implemented in the British SSEM of 1948.[42] Zuse also claimed to have designed the first higher-level programming language, Plankalkül, in 1945 (published in 1948), although it was implemented for the first time in 2000 by a team around Raúl Rojas at the Free University of Berlin—five years after Zuse died.
Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents.
Colossus
Main article: Colossus computer



EXAMAPERS123.BLOGSPOT.COM


Colossus was used to break German ciphers during World War II.
During World War II, the British at Bletchley Park (40 miles north of London) achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was attacked with the help of electro-mechanical machines called bombes. The bombe, designed by Alan Turing and Gordon Welchman, after the Polish cryptographic bomba by Marian Rejewski (1938), came into use in 1941.[43] They ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand.
The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, termed "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Professor Max Newman and his colleagues helped specify the Colossus.[44] The Mk I Colossus was built between March and December 1943 by Tommy Flowers and his colleagues at the Post Office Research Station at Dollis Hill in London and then shipped to Bletchley Park in January 1944.
Colossus was the first totally electronic computing device. The Colossus used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Details of their existence, design, and use were kept secret well into the 1970s. Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand. Due to this secrecy the Colossi were not included in many histories of computing. A reconstructed copy of one of the Colossus machines is now on display at Bletchley Park.
American developments
In 1937, Claude Shannon showed there is a one-to-one correspondence between the concepts of Boolean logic and certain electrical circuits, now called logic gates, which are now ubiquitous in digital computers.[45] In his master's thesis[46] at MIT, for the first time in history, Shannon showed that electronic relays and switches can realize the expressions of Boolean algebra. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design. George Stibitz completed a relay-based computer he dubbed the "Model K" at Bell Labs in November 1937. Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Calculator,[47] completed January 8, 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on September 11, 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely, in this case over a phone line. Some participants in the conference who witnessed the demonstration were John von Neumann, John Mauchly, and Norbert Wiener, who wrote about it in their memoirs.
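Shannon's correspondence is direct: each Boolean connective is a gate, and gates compose into arithmetic. A minimal sketch of a one-bit half adder built from AND, OR, and NOT:

```python
# Each gate is just a Boolean expression on {0, 1}, per Shannon's thesis.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a, b):
    """One-bit half adder: the sum bit is XOR (expressed here with
    AND/OR/NOT), and the carry bit is AND."""
    s = OR(AND(a, NOT(b)), AND(NOT(a), b))
    return s, AND(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Chaining such adders, with the carry of each stage feeding the next, yields the multi-bit adders at the heart of every digital computer described below.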

Atanasoff–Berry Computer replica at 1st floor of Durham Center, Iowa State University
In 1939, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC),[48] the world's first electronic digital computer.[49] The design used over 300 vacuum tubes and employed capacitors fixed in a mechanically rotating drum for memory. Though the ABC machine was not programmable, it was the first to use electronic tubes in an adder. ENIAC co-inventor John Mauchly examined the ABC in June 1941, and its influence on the design of the later ENIAC machine is a matter of contention among computer historians. The ABC was largely forgotten until it became the focus of the lawsuit Honeywell v. Sperry Rand, the ruling of which invalidated the ENIAC patent (and several others) as, among many reasons, having been anticipated by Atanasoff's work.
In 1939, development began at IBM's Endicott laboratories on the Harvard Mark I. Known officially as the Automatic Sequence Controlled Calculator,[50] the Mark I was a general purpose electro-mechanical computer built with IBM financing and with assistance from IBM personnel, under the direction of Harvard mathematician Howard Aiken. Its design was influenced by Babbage's Analytical Engine, using decimal arithmetic and storage wheels and rotary switches in addition to electromagnetic relays. It was programmable via punched paper tape, and contained several calculation units working in parallel. Later versions contained several paper tape readers and the machine could switch between readers based on a condition. Nevertheless, the machine was not quite Turing-complete. The Mark I was moved to Harvard University and began operation in May 1944.
ENIAC
Main article: ENIAC





ENIAC performed ballistics trajectory calculations with 160 kW of power
The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic general-purpose computer. It combined, for the first time, the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine (Colossus could not add). It also had modules for multiplication, division, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, and contained over 18,000 valves. One of the major engineering feats was to minimize valve burnout, which was a common problem at that time. The machine was in almost constant use for the next ten years.
ENIAC was unambiguously a Turing-complete device. It could compute any problem (that would fit in memory). A "program" on the ENIAC, however, was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that evolved from it. Once a program was written, it had to be mechanically set into the machine. Six women did most of the programming of ENIAC. (Improvements completed in 1948 made it possible to execute stored programs set in function table memory, which made programming less a "one-off" effort, and more systematic).
Defining characteristics of some early digital computers of the 1940s:

Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete
Zuse Z3 (Germany) | May 1941 | Binary | Electro-mechanical | Program-controlled by punched film stock (but no conditional branch) | Yes (1998)
Atanasoff–Berry Computer (US) | 1942 | Binary | Electronic | Not programmable (single purpose) | No
Colossus Mark 1 (UK) | February 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No
Harvard Mark I – IBM ASCC (US) | May 1944 | Decimal | Electro-mechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | No
Colossus Mark 2 (UK) | June 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No
ENIAC (US) | July 1946 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes
Manchester Small-Scale Experimental Machine (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes
Modified ENIAC (US) | September 1948 | Decimal | Electronic | Program-controlled by patch cables and switches plus a primitive read-only stored-program mechanism using the Function Tables as program ROM | Yes
EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
Manchester Mark 1 (UK) | October 1949 | Binary | Electronic | Stored-program in Williams cathode ray tube memory and magnetic drum memory | Yes
CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
First-generation machines
Further information: List of vacuum tube computers
Design of the von Neumann architecture (1947)
Even before the ENIAC was finished, Eckert and Mauchly recognized its limitations and started the design of a stored-program computer, EDVAC. John von Neumann was credited with a widely circulated report describing the EDVAC design, in which both the programs and working data were stored in a single, unified store. This basic design, denoted the von Neumann architecture, would serve as the foundation for the worldwide development of ENIAC's successors.[51] In this generation of equipment, temporary or working storage was provided by acoustic delay lines, which used the propagation time of sound through a medium such as liquid mercury (or through a wire) to briefly store data. A series of acoustic pulses is sent along a tube; when a pulse reaches the end of the tube, the circuitry detects whether the pulse represents a 1 or a 0 and causes the oscillator to re-send the pulse. Others used Williams tubes, which use the ability of a television picture tube to store and retrieve data. By 1954, magnetic core memory[52] was rapidly displacing most other forms of temporary storage, and dominated the field through the mid-1970s.

Magnetic core memory. Each core is one bit.
EDVAC was the first stored-program computer designed; however, it was not the first to run. Eckert and Mauchly left the project and its construction foundered. The first working von Neumann machine was the Manchester "Baby" or Small-Scale Experimental Machine, developed by Frederic C. Williams and Tom Kilburn at the University of Manchester in 1948;[53] it was followed in 1949 by the Manchester Mark 1 computer, a complete system, using Williams tube and magnetic drum memory, and introducing index registers.[54] The other contender for the title "first digital stored-program computer" was EDSAC, designed and constructed at the University of Cambridge. Operational less than one year after the Manchester "Baby", it was also capable of tackling real problems. EDSAC was actually inspired by plans for EDVAC (Electronic Discrete Variable Automatic Computer), the successor to ENIAC; these plans were already in place by the time ENIAC was successfully operational. Unlike ENIAC, which used parallel processing, EDVAC used a single processing unit. This design was simpler, was the first to be implemented in each succeeding wave of miniaturization, and increased reliability. Some view Manchester Mark 1 / EDSAC / EDVAC as the "Eves" from which nearly all current computers derive their architecture. Manchester University's machine became the prototype for the Ferranti Mark 1. The first Ferranti Mark 1 machine was delivered to the University in February 1951, and at least nine others were sold between 1951 and 1957.
The first universal programmable computer in the Soviet Union was created by a team of scientists under the direction of Sergei Alekseyevich Lebedev at the Kiev Institute of Electrotechnology, Soviet Union (now Ukraine). The computer MESM (МЭСМ, Small Electronic Calculating Machine) became operational in 1950. It had about 6,000 vacuum tubes and consumed 25 kW of power. It could perform approximately 3,000 operations per second. Another early machine was CSIRAC, an Australian design that ran its first test program in 1949. CSIRAC is the oldest computer still in existence and the first to have been used to play digital music.[55]
Commercial computers
The first commercial computer was the Ferranti Mark 1, which was delivered to the University of Manchester in February 1951. It was based on the Manchester Mark 1. The main improvements over the Manchester Mark 1 were in the size of the primary storage (using random access Williams tubes), secondary storage (using a magnetic drum), a faster multiplier, and additional instructions. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in about 2.16 milliseconds. The multiplier used almost a quarter of the machine's 4,050 vacuum tubes (valves).[56] A second machine was purchased by the University of Toronto, before the design was revised into the Mark 1 Star. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[57]
In October 1947, the directors of J. Lyons & Company, a British catering company famous for its teashops but with strong interests in new office management techniques, decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951 [58] and ran the world's first regular routine office computer job. On 17 November 1951, the J. Lyons company began weekly operation of a bakery valuations job on the LEO (Lyons Electronic Office). This was the first business application to go live on a stored program computer.[59]
In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than $1 million each ($8.2 million as of 2010).[60] UNIVAC was the first "mass produced" computer. It used 5,200 vacuum tubes and consumed 125 kW of power. Its primary storage was serial-access mercury delay lines capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words). A key feature of the UNIVAC system was a newly invented type of metal magnetic tape, and a high-speed tape unit, for non-volatile storage. Magnetic media are still used in many computers.[61] In 1952, IBM publicly announced the IBM 701 Electronic Data Processing Machine, the first in its successful 700/7000 series and its first IBM mainframe computer. The IBM 704, introduced in 1954, used magnetic core memory, which became the standard for large machines. The first implemented high-level general-purpose programming language, Fortran, was also being developed at IBM for the 704 during 1955 and 1956, and was released in early 1957. (Konrad Zuse's 1945 design of the high-level language Plankalkül was not implemented at that time.) A volunteer user group, which exists to this day, was founded in 1955 so that users could share their software and experiences with the IBM 701.

IBM 650 front panel
IBM introduced a smaller, more affordable computer in 1954 that proved very popular.[62] The IBM 650 weighed over 900 kg, the attached power supply weighed around 1350 kg, and both were held in separate cabinets of roughly 1.5 meters by 0.9 meters by 1.8 meters. It cost $500,000 ($3.96 million as of 2010) or could be leased for $3,500 a month ($30 thousand as of 2010).[60] Its drum memory was originally 2,000 ten-digit words, later expanded to 4,000 words. Memory limitations such as this were to dominate programming for decades afterward. Efficient execution using drum memory was provided by a combination of hardware architecture (the instruction format included the address of the next instruction) and software: the Symbolic Optimal Assembly Program, SOAP,[63] assigned instructions to optimal addresses (to the extent possible by static analysis of the source program). Thus many instructions were located, when needed, in the next row of the drum to be read, so no additional wait for drum rotation was required.
In 1955, Maurice Wilkes invented microprogramming,[64] which allows the base instruction set to be defined or extended by built-in programs (now called firmware or microcode).[65] It was widely used in the CPUs and floating-point units of mainframe and other computers, such as the IBM 360 series.[66]
IBM introduced its first magnetic disk system, RAMAC (Random Access Method of Accounting and Control) in 1956. Using fifty 24-inch (610 mm) metal disks, with 100 tracks per side, it was able to store 5 megabytes of data at a cost of $10,000 per megabyte ($80 thousand as of 2010).[60][67]
Second generation: transistors
Main article: Transistor computer
Further information: List of transistorized computers

A bipolar junction transistor
The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs,[68] giving rise to the "second generation" of computers. Initially the only devices available were germanium point-contact transistors, which, although less reliable than the vacuum tubes they replaced, had the advantage of consuming far less power.[69] The first transistorised computer was built at the University of Manchester and was operational by 1953;[70] a second version was completed there in April 1955. The later machine used 200 transistors and 1,300 solid-state diodes and had a power consumption of 150 watts. However, it still required valves to generate the clock waveforms at 125 kHz and to read and write on the magnetic drum memory, whereas the Harwell CADET operated without any valves by using a lower clock frequency of 58 kHz when it became operational in February 1955.[71] Problems with the reliability of early batches of point-contact and alloyed junction transistors meant that the machine's mean time between failures was about 90 minutes, but this improved once the more reliable bipolar junction transistors became available.[72]
Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, essentially indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers' size, initial cost, and operating cost. Typically, second-generation computers were composed of large numbers of printed circuit boards, such as the IBM Standard Modular System,[73] each carrying one to four logic gates or flip-flops.
A second generation computer, the IBM 1401, captured about one third of the world market. IBM installed more than one hundred thousand 1401s between 1960 and 1964.

This RAMAC DASD is being restored at the Computer History Museum
Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. The IBM 350 RAMAC was introduced in 1956 and was the world's first disk drive. The second-generation disk data storage units were able to store tens of millions of letters and digits. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk stack could be easily exchanged with another stack in a few seconds. Even though the removable disks' capacity is smaller than that of fixed disks, their interchangeability guarantees a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk.
Many second generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databusses would typically serve the peripheral devices. On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles; one for the instruction, one for the operand data fetch.
During the second generation, remote terminal units (often in the form of teletype machines like a Friden Flexowriter) saw greatly increased use. Telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers of separation between remote terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected network of networks—the Internet.[74]
Post-1960: third generation and beyond
Main articles: history of computing hardware (1960s–present) and history of general purpose CPUs

Intel 8742 eight-bit microcontroller IC
The explosion in the use of computers began with "third-generation" computers, making use of Jack St. Clair Kilby's[75] and Robert Noyce's[76] independent invention of the integrated circuit (or microchip), which later led to the invention of the microprocessor,[77] by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[78] The integrated circuit in the image on the right, for example, an Intel 8742, is an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.
During the 1960s there was considerable overlap between second and third generation technologies.[79] IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the B5000 were stack machines, which allowed for simpler programming. These pushdown automatons were also implemented in minicomputers and microprocessors later, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business and universities.[80] It became possible to simulate analog circuits with the simulation program with integrated circuit emphasis, or SPICE (1971) on minicomputers, one of the programs for electronic design automation (EDA). The microprocessor led to the development of the microcomputer, small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond. Steve Wozniak, co-founder of Apple Computer, is sometimes erroneously credited with developing the first mass-market home computers. However, his first computer, the Apple I, came out some time after the MOS Technology KIM-1 and Altair 8800, and the first Apple computer with graphic and sound capabilities came out well after the Commodore PET. Computing has evolved with microcomputer architectures, with features added from their larger brethren, now dominant in most market segments.
Systems as complicated as computers require very high reliability. ENIAC remained on, in continuous operation from 1947 to 1955, before being shut down. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, the failures were dramatically reduced. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform.[81] Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on-the-fly, during a service event.[82]
In the twenty-first century, multi-core CPUs became commercially available.[83] Content-addressable memory (CAM)[84] has become inexpensive enough to be used in networking, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. When the CMOS field effect transistor-based logic gates supplanted bipolar transistors, computer power consumption could decrease dramatically (A CMOS field-effect transistor only draws significant current during the 'transition' between logic states, unlike the substantially higher (and continuous) bias current draw of a BJT). This has allowed computing to become a commodity which is now ubiquitous, embedded in many forms, from greeting cards and telephones to satellites. Computing hardware and its software have even become a metaphor for the operation of the universe.[85] Although DNA-based computing and quantum qubit computing are years or decades in the future, the infrastructure is being laid today, for example, with DNA origami on photolithography.[86]
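The software associative arrays mentioned above can be sketched in Python, whose built-in dict performs lookups by content (the key) rather than by numeric address; the MAC-address table below is a purely illustrative example of the kind of lookup networking hardware performs in CAM:

```python
# A software associative array: entries are found by content (key),
# not by storage address. Hardware CAMs do this lookup in parallel;
# software dictionaries do it via hashing.
mac_table = {
    "00:1a:2b:3c:4d:5e": "port 1",
    "00:5e:4d:3c:2b:1a": "port 7",
}

# Content-addressed lookup: which port does this MAC address map to?
print(mac_table["00:1a:2b:3c:4d:5e"])  # -> port 1
```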
An indication of the rapidity of development of this field can be gained from the history of the seminal article.[87] By the time anyone had time to write anything down, it was obsolete. After 1945, others read John von Neumann's First Draft of a Report on the EDVAC, and immediately started implementing their own systems. To this day, the pace of development has continued, worldwide.[88][89]
See also
Wikiversity has learning materials about Introduction to Computers/History

• History of computing
• Timeline of computing
• IT History Society
• Information Age
• The Secret Guide to Computers (book)
• List of vacuum tube computers
• List of transistorized computers
Notes
1. ^ computer, n., Oxford English Dictionary (2 ed.), Oxford University Press, 1989, http://dictionary.oed.com/, retrieved 2009-04-10
2. ^ According to Schmandt-Besserat 1981, these clay containers contained tokens, the total of which were the count of objects being transferred. The containers thus served as a bill of lading or an accounts book. In order to avoid breaking open the containers, marks were placed on the outside of the containers, for the count. Eventually (Schmandt-Besserat estimates it took 4000 years) the marks on the outside of the containers were all that were needed to convey the count, and the clay containers evolved into clay tablets with marks for the count.
3. ^ Eleanor Robson (2008), Mathematics in Ancient Iraq ISBN 978-0-691-09182-2 p.5: these calculi were in use in Iraq for primitive accounting systems as early as 3200-3000 BCE, with commodity-specific number systems. Balanced accounting was in use by 3000-2350 BCE, and a sexagesimal number system was in use 2350-2000 BCE.
4. ^ Lazos 1994
5. ^ a b Ancient Discoveries, Episode 11: Ancient Robots, History Channel, http://www.youtube.com/watch?v=rxjbaQl0ad8, retrieved 2008-09-06
6. ^ Howard R. Turner (1997), Science in Medieval Islam: An Illustrated Introduction, p. 184, University of Texas Press, ISBN 0292781490
7. ^ Donald Routledge Hill, "Mechanical Engineering in the Medieval Near East", Scientific American, May 1991, pp. 64–9 (cf. Donald Routledge Hill, Mechanical Engineering)
8. ^ A Spanish implementation of Napier's bones (1617), is documented in Montaner & Simon 1887, pp. 19–20.
9. ^ Kells, Kern & Bland 1943, p. 92
10. ^ Kells, Kern & Bland 1943, p. 82.
11. ^ Schmidhuber
12. ^ As quoted in Smith 1929, pp. 180–181
13. ^ Discovering the Arithmometer, Cornell University
14. ^ Leibniz 1703
15. ^ Binary-coded decimal (BCD) is a numeric representation, or character encoding, which is still extant.
16. ^ Yamada, Akihiko ([dead link]), Biquinary mechanical calculating machine,“Jido-Soroban” (automatic abacus), built by Ryoichi Yazu, National Science Museum of Japan, p. 8, http://sts.kahaku.go.jp/temp/5.pdf
17. ^ The History of Japanese Mechanical Calculating Machines
18. ^ Mechanical Calculator, "JIDOSOROBAN", The Japan Society of Mechanical Engineers (in Japanese)
19. ^ Jones
20. ^ Menabrea & Lovelace 1843
21. ^ Columbia University Computing History — Herman Hollerith
22. ^ U.S. Census Bureau: Tabulation and Processing
23. ^ Lubar 1991
24. ^ Eckert 1935
25. ^ Eckert 1940, pp. 101–114. Chapter XII is "The Computation of Planetary Perturbations".
26. ^ Fisk 2005
27. ^ Hunt 1998, pp. xiii-xxxvi
28. ^ Chua 1971, pp. 507–519
29. ^ See, for example, Horowitz & Hill 1989, pp. 1–44
30. ^ Norden
31. ^ Singer 1946
32. ^ Phillips
33. ^ (French) Coriolis 1836, pp. 5–9
34. ^ The noise level, compared to the signal level, is a fundamental factor, see for example Davenport & Root 1958, pp. 112–364.
35. ^ Ziemer, Tranter & Fannin 1993, p. 370.
36. ^ Turing 1937, pp. 230–265. Online versions: Proceedings of the London Mathematical Society Another version online.
37. ^ Kurt Gödel (1964), p. 71, "Postscriptum" in Martin Davis (ed., 2004), The Undecidable: Fundamental papers by Gödel, Church, Turing, and Post on this topic and the relationship to computability. ISBN 0486432289, as summarized in Church–Turing thesis.
38. ^ Moye 1996
39. ^ Bergin 1996
40. ^ Inventor Profile: George R. Stibitz, National Inventors Hall of Fame Foundation, Inc., http://www.invent.org/hall_of_fame/140.html
41. ^ Zuse
42. ^ "Electronic Digital Computers", Nature 162: 487, 25 September 1948, http://www.computer50.org/kgill/mark1/natletter.html, retrieved 2009-04-10
43. ^ Welchman 1984, pp. 138–145, 295–309
44. ^ Copeland 2006.
45. ^ Claude Shannon, "A Symbolic Analysis of Relay and Switching Circuits", Transactions of the American Institute of Electrical Engineers, Vol. 57,(1938), pp. 713-723
46. ^ Shannon 1940
47. ^ George Stibitz, US patent 2668661, "Complex Computer", granted 1954-02-09 , assigned to AT&T, 102 pages.
48. ^ January 15, 1941 notice in the Des Moines Register.
49. ^ The First Electronic Computer By Arthur W. Burks
50. ^ Da Cruz 2008
51. ^ von Neumann 1945, p. 1. The title page, as submitted by Goldstine, reads: "First Draft of a Report on the EDVAC by John von Neumann, Contract No. W-670-ORD-4926, Between the United States Army Ordnance Department and the University of Pennsylvania Moore School of Electrical Engineering".
52. ^ An Wang filed October 1949, US patent 2708722, "Pulse transfer controlling devices", granted 1955-05-17 .
53. ^ Enticknap 1998, p. 1; Baby's 'first good run' was June 21, 1948.
54. ^ Manchester 1998, by R.B.E. Napper, et al.
55. ^ CSIRAC 2005
56. ^ Lavington 1998, p. 25
57. ^ Computer Conservation Society, Our Computer Heritage Pilot Study: Deliveries of Ferranti Mark I and Mark I Star computers., http://www.ourcomputerheritage.org/wp/, retrieved 9 January 2010
58. ^ Lavington, Simon. "A brief history of British computers: the first 25 years (1948 - 1973).". British Computer Society. http://www.bcs.org/server.php?. Retrieved 10 January 2010.
59. ^ Martin 2008, p. 24 notes that David Caminer (1915–2008) served as the first corporate electronic systems analyst, for this first business computer system, a Leo computer, part of J. Lyons & Company. LEO would calculate an employee's pay, handle billing, and other office automation tasks.
60. ^ a b c "Consumer Price Index (estimate) 1800–2008". Federal Reserve Bank of Minneapolis. http://www.minneapolisfed.org/community_education/teacher/calc/hist1800.cfm. Retrieved August 1, 2009.
61. ^ Magnetic tape will be the primary data storage mechanism when CERN's Large Hadron Collider comes online in 2008.
62. ^ For example, Kara Platoni's article on Donald Knuth stated that "there was something special about the IBM 650", Stanford Magazine, May/June 2006
63. ^ IBM (1957) (PDF), SOAP II for the IBM 650, C24-4000-0, http://www.bitsavers.org/pdf/ibm/650/24-4000-0_SOAPII.pdf
64. ^ Wilkes 1986, pp. 115–126
65. ^ Horowitz & Hill 1989, p. 743
66. ^ Patterson & Hennessy 1998, p. 424
67. ^ IBM 1956
68. ^ Feynman, Leighton & Sands 1965, pp. III 14-11 to 14-12
69. ^ Lavington 1998, pp. 34–35
70. ^ Lavington 1998, p. 37
71. ^ Cooke-Yarborough, E.H. (June 1998), "Some early transistor applications in the UK.", Engineering and Science Education Journal (London, UK: IEE) 7 (3): 100–106, doi:10.1049/esej:19980301, ISSN 0963-7346, http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=00689507, retrieved 2009-06-07
72. ^ Lavington 1998, pp. 36–37
73. ^ IBM_SMS 1960
74. ^ Mayo & Newcomb 2008, pp. 96–117; Jimbo Wales is quoted on p. 115.
75. ^ Kilby 2000
76. ^ Robert Noyce's Unitary circuit, US patent 2981877, "Semiconductor device-and-lead structure", granted 1961-04-25 , assigned to Fairchild Semiconductor Corporation.
77. ^ Intel_4004 1971
78. ^ The Intel 4004 (1971) die was 12mm2, composed of 2300 transistors; by comparison, the Pentium Pro was 306mm2, composed of 5.5 million transistors, according to Patterson & Hennessy 1998, pp. 27–39
79. ^ In the defense field, considerable work was done in the computerized implementation of equations such as Kalman 1960, pp. 35–45
80. ^ Eckhouse & Morris 1979, pp. 1–2
81. ^ "Since 2005, its [Google's] data centers have been composed of standard shipping containers--each with 1,160 servers and a power consumption that can reach 250 kilowatts." — Ben Jai of Google, as quoted in Shankland 2009
82. ^ "If you're running 10,000 machines, something is going to die every day." —Jeff Dean of Google, as quoted in Shankland 2008.
83. ^ Intel has unveiled a single-chip version of a 48-core CPU for software and circuit research in cloud computing (accessed 2009-12-02). Intel has loaded Linux on each core; each core has an X86 architecture (accessed 2009-12-03).
84. ^ Kohonen 1980, pp. 1–368
85. ^ Smolin 2001, pp. 53–57. Pages 220–226 are annotated references and guide for further reading.
86. ^ Ryan J. Kershner, Luisa D. Bozano, Christine M. Micheel, Albert M. Hung, Ann R. Fornof, Jennifer N. Cha, Charles T. Rettner, Marco Bersani, Jane Frommer, Paul W. K. Rothemund & Gregory M. Wallraff (16 August 2009) "Placement and orientation of individual DNA shapes on lithographically patterned surfaces" Nature Nanotechnology publication information, supplementary information: DNA origami on photolithography doi:10.1038/nnano.2009.220
87. ^ Burks, Goldstine & von Neumann 1947, pp. 1–464 reprinted in Datamation, September-October 1962. Note that preliminary discussion/design was the term later called system analysis/design, and even later, called system architecture.
88. ^ IEEE_Annals 1979 Online access to the IEEE Annals of the History of Computing here. DBLP summarizes the Annals of the History of Computing year by year, back to 1996, so far.

Communication process, Verbal and Non-Verbal communications

Communication is the transfer of words and ideas between two or more people. It is composed of different methods, such as words, tone of voice, and non-verbal cues.
According to research, words account for 7% of the effect, tone of voice for 38%, and non-verbal cues for 55%.
Somebody has correctly said, "Speech is silver, silence is golden." In other words, what you say is not as important as how you say it.
A dull message delivered by a charismatic person filled with energy and enthusiasm will be received as brilliant, whereas an excellent message delivered by someone who is not interested in the topic will not engage the interest of the audience.
Elements of communication:
There are two elements of communication:
1) Speaking
2) Listening
1) Speaking :
Speaking is the process by which you utter sentences. Further, the elements of speaking are:
1) Body Language
2) Voice Quality
3) Intention
4) Manner
5) Dressing Style
6) Visual Aids, Animation
7) Eye contact
8) Energy
9) Sensitivity
10) Rhythm
11) Purpose
12) Clarity

2) Listening :
Listening is understanding the perspective of the speaker.
Further, the elements of listening are:
1) Attentiveness to speaker
2) Eye contact
3) Awareness
4) Openness
5) Listening to yourself
6) Body Language
7) Willingness

We must learn to listen better in order to speak more clearly. We must also check that our message is delivered correctly and clearly.

Listening is the key element of communication. It occurs when we listen with understanding, that is, when we see the attitude from the speaker's point of view.

By practicing listening skills, a person can develop speaking skills.

Listening is not the same as hearing. Listening means understanding what the speaker wants to say.

Listening + speaking = conversation

Wednesday, October 27, 2010

System Calls and System Programs

System calls provide an interface between a process and the operating system. System calls allow user-level processes to request services from the operating system that the process itself is not allowed to perform. In handling the trap, the operating system enters kernel mode, where it has access to privileged instructions, and can perform the desired service on behalf of the user-level process. It is because of the critical nature of these operations that the operating system itself performs them every time they are needed. For example, for I/O a process invokes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
System programs provide basic functionality to users so that they do not need to write their own environment for program development (editors, compilers) and program execution (shells). In some sense, they are bundles of useful system calls.
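As a minimal sketch (not from the original text), the following Python fragment shows user code requesting kernel services through system calls: the functions in Python's os module, such as os.open, os.write, and os.read, are thin wrappers over the underlying open(2), write(2), and read(2) calls. The file name demo.txt is hypothetical.

```python
import os

# Each of these calls traps into the kernel, which performs the
# privileged I/O on the process's behalf and returns the result.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello via a system call\n")   # write(2)
os.close(fd)                                  # close(2)

fd = os.open("demo.txt", os.O_RDONLY)         # open(2)
data = os.read(fd, 100)                       # read(2): up to 100 bytes
os.close(fd)
os.remove("demo.txt")                         # unlink(2): clean up

print(data)
```

Note that user code never touches the device directly; it only asks the kernel, via the system-call interface, to do so.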

Interprocess Communication


Since processes frequently need to communicate with other processes, there is a need for well-structured communication among processes, without using interrupts.

Race Conditions
In operating systems, processes that are working together share some common storage (main memory, files, etc.) that each process can read and write. Situations in which two or more processes are reading or writing some shared data, and the final result depends on who runs precisely when, are called race conditions. Concurrently executing threads that share data need to synchronize their operations and processing in order to avoid race conditions on shared data. Only one thread at a time should be allowed to examine and update the shared variable.
Race conditions are also possible in Operating Systems. If the ready queue is implemented as a linked list and if the ready queue is being manipulated during the handling of an interrupt, then interrupts must be disabled to prevent another interrupt before the first one completes. If interrupts are not disabled than the linked list could become corrupt.


Critical Section
How to avoid race conditions?


The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. The part of the program in which the shared memory is accessed is called the Critical Section. To avoid race conditions and flawed results, one must identify the code forming the Critical Section in each thread. The characteristic properties of code that forms a Critical Section are:
• Code that references one or more variables in a "read-update-write" fashion while any of those variables is possibly being altered by another thread.
• Code that alters one or more variables that are possibly being referenced in a "read-update-write" fashion by another thread.
• Code that uses a data structure while any part of it is possibly being altered by another thread.
• Code that alters any part of a data structure while it is possibly in use by another thread.

Here, the important point is that when one process is executing shared modifiable data in its critical section, no other process should be allowed to execute in its own critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time.


Mutual Exclusion


A way of making sure that if one process is using a shared modifiable data, the other processes will be excluded from doing the same thing.
Formally, while one process is executing the shared variable, all other processes desiring to do so at the same moment should be kept waiting; when that process has finished executing the shared variable, one of the processes waiting to do so should be allowed to proceed. In this fashion, each process executing the shared data (variables) excludes all others from doing so simultaneously. This is called Mutual Exclusion.
Note that mutual exclusion needs to be enforced only when processes access shared modifiable data - when processes are performing operations that do not conflict with one another they should be allowed to proceed concurrently.


Mutual Exclusion Conditions
If we could arrange matters such that no two processes were ever in their critical sections simultaneously, we could avoid race conditions. A good solution to the critical section problem (mutual exclusion) must satisfy four conditions:
• No two processes may be simultaneously inside their critical sections.
• No assumptions may be made about the relative speeds of processes or the number of CPUs.
• No process running outside its critical section may block other processes.
• No process should have to wait arbitrarily long to enter its critical section.


Proposals for Achieving Mutual Exclusion

The mutual exclusion problem is to devise a pre-protocol (or entry protocol) and a post-protocol (or exit protocol) to keep two or more threads from being in their critical sections at the same time. Tanenbaum examines several proposals for solving the critical-section (mutual exclusion) problem.
Problem
When one process is updating shared modifiable data in its critical section, no other process should be allowed to enter its critical section.


Proposal 1 -Disabling Interrupts (Hardware Solution)
Each process disables all interrupts just after entering its critical section and re-enables them just before leaving it. With interrupts turned off, the CPU cannot be switched to another process. Hence, no other process will enter its critical section, and mutual exclusion is achieved.
Conclusion
Disabling interrupts is sometimes a useful technique within the kernel of an operating system, but it is not appropriate as a general mutual exclusion mechanism for user processes. The reason is that it is unwise to give user processes the power to turn off interrupts.


Proposal 2 - Lock Variable (Software Solution)
In this solution, we consider a single shared (lock) variable, initially 0. When a process wants to enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section. If the lock is already 1, the process just waits until the lock variable becomes 0. Thus, 0 means that no process is in its critical section, and 1 means hold your horses: some process is in its critical section.
Conclusion
The flaw in this proposal is best explained by example. Suppose process A sees that the lock is 0. Before it can set the lock to 1, another process B is scheduled, runs, and sets the lock to 1. When process A runs again, it will also set the lock to 1, and the two processes will be in their critical sections simultaneously.


Proposal 3 - Strict Alternation
In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section. Process B also finds it to be 0 and sits in a loop continually testing 'turn' to see when it becomes 1. Continuously testing a variable while waiting for some value to appear is called busy waiting.
Conclusion
Taking turns is not a good idea when one of the processes is much slower than the other. Suppose process 0 finishes its critical section quickly, so both processes are now in their noncritical sections. Process 0 can then be blocked from re-entering its critical section by process 1, even though process 1 is outside its own critical section. This violates condition 3 above.
Using Systems calls 'sleep' and 'wakeup'
Basically, what the above-mentioned solutions do is this: when a process wants to enter its critical section, it checks to see whether entry is allowed. If it is not, the process goes into a tight loop and waits (i.e., starts busy waiting) until it is allowed to enter. This approach wastes CPU time.
Now look at a pair of interprocess communication primitives: sleep and wakeup.
• Sleep
o It is a system call that causes the caller to block, that is, be suspended until some other process wakes it up.
• Wakeup
o It is a system call that wakes up the process.

Both the 'sleep' and 'wakeup' system calls take one parameter that represents a memory address, used to match up sleeps with wakeups.


The Bounded Buffer Producers and Consumers
The bounded-buffer producer-consumer problem assumes that there is a fixed buffer size, i.e., a finite number of slots is available.
Statement
Suspend the producers when the buffer is full, suspend the consumers when the buffer is empty, and make sure that only one process at a time manipulates the buffer, so there are no race conditions or lost updates.
As an example of how the sleep-wakeup system calls are used, consider the producer-consumer problem, also known as the bounded-buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer puts information into the buffer and the consumer takes information out.
Trouble arises when
1. The producer wants to put new data in the buffer, but the buffer is already full.
Solution: The producer goes to sleep, to be awakened when the consumer has removed some data.
2. The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: The consumer goes to sleep until the producer puts some data in the buffer and wakes the consumer up.
Conclusion
This approach also leads to the same race conditions we have seen in earlier approaches. The race condition can occur because access to 'count' is unconstrained. The essence of the problem is that a wakeup call sent to a process that is not yet sleeping is lost.

Semaphores


E.W. Dijkstra (1965) abstracted the key notion of mutual exclusion in his concept of semaphores.
Definition
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation.
Binary semaphores can assume only the values 0 and 1. Counting semaphores, also called general semaphores, can assume any nonnegative value.

The P (or wait or sleep or down) operation on semaphore S, written P(S) or wait(S), operates as follows:
P(S): IF S > 0
THEN S := S - 1
ELSE (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written V(S) or signal(S), operates as follows:
V(S): IF (one or more processes are waiting on S)
THEN (let one of these processes proceed)
ELSE S := S + 1

Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore S is enforced within P(S) and V(S).
If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement.
Semaphores solve the lost-wakeup problem.
Producer-Consumer Problem Using Semaphores
The solution to the producer-consumer problem uses three semaphores, namely full, empty and mutex.
The semaphore 'full' counts the number of slots in the buffer that are full, 'empty' counts the number of slots that are empty, and 'mutex' makes sure that the producer and consumer do not access the modifiable shared sections of the buffer simultaneously.
Initialization
• Set full buffer slots to 0.
i.e., semaphore Full = 0.
• Set empty buffer slots to N.
i.e., semaphore empty = N.
• For control access to critical section set mutex to 1.
i.e., semaphore mutex = 1.
Producer ( )
WHILE (true)
produce-Item ( );
P (empty);
P (mutex);
enter-Item ( );
V (mutex);
V (full);

Consumer ( )
WHILE (true)
P (full);
P (mutex);
remove-Item ( );
V (mutex);
V (empty);
consume-Item (Item);









Definition of Process




The notion of process is central to the understanding of operating systems. There are quite a few definitions presented in the literature, but no "perfect" definition has yet appeared.

Definition
The term "process" was first used by the designers of MULTICS in the 1960s. Since then, the term process has been used, somewhat interchangeably, with 'task' or 'job'. The process has been given many definitions, for instance:
• A program in execution.
• An asynchronous activity.
• The 'animated spirit' of a procedure in execution.
• The entity to which processors are assigned.
• The 'dispatchable' unit.
and many more definitions have been given. As we can see, there is no universally agreed-upon definition, but "program in execution" seems to be the most frequently used, and this is the concept we will use in the present study of operating systems.
Now that we have agreed upon a definition of process, the question is: what is the relation between a process and a program? Is it the same beast with a different name, so that when the beast is sleeping (not executing) it is called a program, and when it is executing it becomes a process? To be precise, a process is not the same as a program, and in the following discussion we point out some of the differences.
A process is more than the program code. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. As we all know, a program is an algorithm expressed in some suitable notation (e.g., a programming language). Being passive, a program is only part of a process. A process, on the other hand, includes:
• Current value of Program Counter (PC)
• Contents of the processors registers
• Value of the variables
• The process stack, which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables.
• A data section that contains global variables.
A process is the unit of work in a system.
In the process model, all software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, the CPU switches back and forth among processes; this rapid switching back and forth is called multiprogramming.




Process State


The process state consists of everything necessary to resume the process's execution if it is somehow put aside temporarily. It consists of at least the following:
• Code for the program.
• Program's static data.
• Program's dynamic data.
• Program's procedure call stack.
• Contents of general purpose registers.
• Contents of program counter (PC)
• Contents of program status word (PSW).
• Operating Systems resource in use.

A process goes through a series of discrete process states.
• New State: The process is being created.
• Running State: A process is said to be running if it has the CPU, that is, if it is actually using the CPU at that particular instant.
• Blocked (or waiting) State: A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.
• Ready State: A process is said to be ready if it could use a CPU if one were available. A ready process is runnable but temporarily stopped to let another process run.
• Terminated State: The process has finished execution.













Process Control Block


A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process including
• The current state of the process i.e., whether it is ready, running, waiting, or whatever.
• Unique identification of the process in order to track "which is which" information.
• A pointer to parent process.
• Similarly, a pointer to child process (if it exists).
• The priority of process (a part of CPU scheduling information).
• Pointers to locate memory of processes.
• A register save area.
• The processor it is running on.
The PCB serves as a central store that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system.






C LANGUAGE PROGRAMS BITS

1. The recurrence relation that arises in relation with the complexity of binary search is
a. T(n)=T(n/2)+k, where k is constant
b. T(n)=2T(n/2)+k, where k is constant
c. T(n)=T(n/2)+log(n)
d. T(n)=T(n/2)+n
2. The running time T(n), where 'n' is the input size of a recursive algorithm, is given as follows:
T(n) = c + T(n-1), if n > 1
= d, if n ≤ 1
The order of this algorithm is
a. n2
b. n
c. n3
d. nn
3. The concept of order(Big O) is important because
a. it can be used to decide the best algorithm that solves a given problem
b. It determines the minimum size of a problem that can be solved in a given system, in a given amount of time
c. It is the lower bound of the growth rate of the algorithm
d. It is the average bound of the growth rate of the algorithm
4. The concept of order(Big O) is important because
a. it can not be used to decide the best algorithm that solves a given problem
b. It determines the maximum size of a problem that can be solved in a given system, in a given amount of time
c. It is the lower bound of the growth rate of the algorithm
d. It is the average bound of the growth rate of the algorithm
5. The time complexity of an algorithm T(n), where n is the input size, is given by
T(n) = T(n-1) + 1/n, if n > 1
= 1 otherwise
The order of the algorithm is
a. log n
b. n
c. n2
d. nn
6. The running time of an algorithm is given by
T(n) = T(n-1) + T(n-2) - T(n-3), if n > 3
= n otherwise
The order of this algorithm is
a. n
b. log n
c. nn
d. n2
7. If n=4,then the value of O(log n) is
a. 1
b. 2
c. 4
d. 8
8. If n=4,then the value of O( n2) is
a. 4
b. 16
c. 64
d. 512
9. The average time complexity of insertion sort is
a. O(n2)
b. O(n)
c. O(1)
d. O(log n)
10. The running time of an algorithm is given by
T(n) = T(n-1) + T(n-2) - T(n-3), if n > 3
= n otherwise
What should be the relation between T(1), T(2) and T(3) so that its order is constant?
a. T(1)=T(2)=T(3)
b. T(1)+T(3)=2T(2)
c. T(1)-T(3)=T(2)
d. T(1)+T(2)=T(3)
11. The order of the algorithm that finds whether a given Boolean function of 'n' variables produces a 1 is
a. constant
b. linear
c. non-linear
d. exponential
12. If n=16, then the value of O(n log n) is
a. 16
b. 32
c. 64
d. 128


13. How many memory management functions are there in C
a. 4
b. 3
c. 2
d. 1
14. Which of the following is not a C memory allocation function
a. alloc( )
b. calloc( )
c. free
d. malloc()
15. If n= 8, then the value of O(1) is
a. 1
b. 2
c. 4
d. 8
16. If n=4, then the value of O(n3) is
a. 4
b. 16
c. 64
d. 512
17. If n=2, then the value of O(n) is
a. 2
b. 3
c. 4
d. 5
18. All memory management functions are found in
a. stdlib.h
b. stdio.h
c. conio.h
d. math.h
19. The function that returns memory to the heap is
a. alloc( )
b. free( )
c. malloc( )
d. realloc( )
20. Which of the following statement about the releasing memory allocation is false?
a. It is an error to dereference a pointer to allocated memory after the memory has been released
b. It is an error to free memory with a pointer to other than the first element of an allocated array
c. Memory should be freed as soon as it is no longer needed
d. To ensure that it is released, allocated memory should be freed before the program ends
21. The syntax of free() function
a. void free(void* free)
b. int free(void* ptr)
c. float free(void* ptr)
d. void free(ptr)
22. Which of the memory function allocates a block of memory
a. malloc ( )
b. calloc( )
c. release( )
d. free( )
23. Return type of a calloc( ) function is
a. int
b. float
c. char
d. void
24. Return type of a realloc( ) function is
a. int
b. float
c. char
d. void
25. Which of the following memory management function used to release memory
a. malloc( )
b. calloc( )
c. release( )
d. free( )
26. Which of the following is considered auxiliary storage?
a. disk
b. random access memory (RAM)
c. read only memory(ROM)
d. EEPROM
27. Which of the following is not a standard file stream?
a. stdin
b. stderr
c. stdfile
d. stdout
28. The C library that contains the prototype statements for the file operations is
a. file.h
b. proto.h
c. stdio.h
d. stdlib.h
29. In C, local variable are stored in
a. stack
b. heap
c. permanent storage
d. hard disk
30. The linked list field(s) are
a. data
b. pointer
c. pointer to next node
d. data and pointer to next node
31. The linked list structure is defined as
a. struct node
{
int item;
struct node *next;
};
b. node
{
int item;
struct node *next;
};
c. struct node
{
int item;
node *node;
};
d. node
{
Int item;
node next;
};
32. Dynamic memory area is
a. heap
b. stack
c. permanent storage
d. Hard disk
33. The contents of the storage space allocated dynamically, can be accessed through _ _ _ _ _ _ _
a. structure variables
b. pointers
c. unions
d. arrays
34. Each item in the list contains a 'link' to the structure containing the _ _ _ _ _ _ _ item
a. previous
b. next
c. present
d. last
35. In C, program instructions are stored in
a. stack
b. heap
c. permanent storage
d. Hard disk
36. In C, Global variables are stored in
a. permanent storage
b. stack
c. heap
d. Hard disk
37. In C, static variables are stored in
a. heap
b. permanent storage
c. Hard disk
d. Stack
38. A list refers to a set of items organized _ _ _ _ _
a. sequentially
b. exponentially
c. non-sequentially
d. factorially
39. Each structure of a linked list consists _ _ _ _ _ _ _ no. of fields
a. 2
b. 3
c. 4
d. 1
40. Linked lists are not suitable for data structures of which one of the following problem?
a. insertion sort
b. Binary search
c. radix sort
d. polynomial manipulation problem
41. An item that is read as input can either be pushed to a stack and later popped and printed, or printed directly. Which of the following will be the output if the input is the sequence of items 1,2,3,4,5?
a. 3,4,5,1,2
b. 3,4,5,2,1
c. 1,5,2,3,4
d. 5,4,3,1,2
42. No.of pointers to be manipulated in a linked list to delete an item in the middle _ _ _ _ _ _ _
a. Zero
b. One
c. Two
d. Three
43. No.of pointers to be manipulated in a linked list to delete first item
a. Zero
b. One
c. Two
d. Three
44. Stack is useful for _ _ _ _ _ _ _
a. radix sort
b. breadth first search
c. recursion
d. quick sort
45. The end of the list is marked as
a. node.next=0
b. (node.last = 0)
c. node.next= &node;
d. node.previous=0;
46. No.of pointers to be manipulated in a linked list to insert an item in the middle _ _ _ _ _ _ _ _
a. Two
b. Three
c. One
d. Zero
47. No. of pointers to be manipulated in a linked list to delete last item
a. Zero
b. One
c. Two
d. Three
48. Single linked list uses _ _ _ _ _ _ no. of pointers
a. Zero
b. one
c. Two
d. Three
49. LIFO is
a. stack
b. queue
c. linked list
d. tree
50. Stack A has the entries a,b,c (with a on top). Stack B is empty. An entry popped out of stack A can be printed immediately or pushed to stack B. An entry popped out of stack B can only be printed. In this arrangement, which of the following permutations of a,b,c is not possible?
a. b a c
b. b c a
c. c a b
d. a b c
51. Which of the following programming languages features require a stack-base allocation
a. pointer
b. Block-structure
c. recursion
d. dynamic scoping
52. Push down stack or push down list is
a. stack
b. queue
c. linked list
d. dequeue
53. Stack is useful for
a. radix sort
b. breadth first search
c. recursion
d. Heap sort
54. Stacks can not be used to
a. evaluate an arithmetic expression in postfix form
b. implement recursion
c. convert a given arithmetic expression in infix form to is equivalent postfix form
d. allocates resources (like CPU) by the operating system
55. Stack is useful for implementing
a. radix sort
b. breadth first search
c. selection sort
d. depth first search
56. Which of the following is useful in implementing quick sort?
a. stack
b. set
c. list
d. queue
57. Which of the following is essential for converting an infix expression to postfix form efficiently?
a. An operator stack
b. An operand stack
c. An operator stack and an operand stack
d. A parse tree
58. A stack is most suitable to evaluate _ _ _ _ _ expression
a. postfix
b. prefix
c. infix
d. post & infix
59. Linear linked data structure is
a. tree
b. graph
c. stack
d. binary tree
60. A queue of characters currently contains a,b,c,d. What would be the contents of the queue after the following operations: DELETE, ADD W, ADD X, DELETE, ADD Y?
a. A,B,C,W,Y
b. C,D,W,X,Y
c. W,Y,X,C,D
d. A,B,C,D,W
61. Which of the following data structure is suitable for priority queue?
a. Doubly linked list
b. Circular queues
c. Binary search
d. Heaps
62. For storing the sorted data on which often insert and deletion operations are performed, the following data structure is better
a. Array
b. queue
c. linked-list
d. doubly linked-list
63. A circular queue of size N will signal queue full when the number of elements in the queue is
a. N-1
b. N
c. N+1
d. N-2
64. The postfix equivalent of the prefix * + a b - c d is
a. ab + cd - *
b. ab cd + - *
c. ab + cd * -
d. ab + - cd *
65. The postfix expression for the infix expression A + B* (C+D) / F + D*E is:
a. AB + CD + F / D + E*
b. ABCD + * F / + DE*
c. A*B + CD / F*DE ++
d. A+ BCD / F* DE ++
66. A telephone system which places cells to a particular number on hold can best represented by
a. queue
b. stack
c. linked-list
d. variable
67. The performance of an algorithm is specified by the following notation that represents the worst case
a. O-notation
b. Omega notation
c. Theta notation
d. alpha-notation
68. If front=rear ,then the queue is
a. full
b. empty
c. unknown value
d. 1/2 full
69. Reverse polish expression is
a. Infix
b. postfix
c. prefix
d. post & prefix
70. A list of integers is read in, one at a time, and a binary search tree is constructed. Next the tree is traversed and the integers are printed. Which traversal would result in a printout which duplicates the original order of the list of integers?
a. pre-order
b. post-order
c. in-order
d. in-fix order
71. The postfix expression for the infix expression A + B* (C+D) / F + D*E is
a. AB + CD + * F/D + E *
b. ABCD + *F / + DE* +
c. A*B + CD / F*DE ++
d. A + *BCD / F*DE ++
72. The equivalent of (a+b↑c↑d)*(e+f/d) in the post fix notation is
a. ab+c↑d↑e &fd/
b. abcd↑+↑efd/+*
c. abcdefd/+*↑↑+
d. abcd↑↑+efd/+*
73. The infix form of the postfix expression ABC-/D*E+ is
a. A/B-C*D+E
b. A-B/C*D+E
c. (A-B)/C*D+E
d. A/(B-C)*D+E
74. The postfix expression for the infix expression A/B*C+D*E is
a. AB/C*DE*+
b. ABC/*DE+*
c. ABCD/*E+*
d. ABC*/D*E+
75. The prefix expression for the infix expression A/B*C+D*E is
a. AB/C*DE*+
b. +*/ABC*DE
c. +*AB/C*DE
d. /+ABCDE
76. Suffix expression is
a. Infix
b. postfix
c. prefix
d. post & prefix
77. polish expression is
a. infix
b. postfix
c. prefix
d. post & prefix
78. To convert an Infix expression into postfix we require
a. stack
b. queue
c. linked list
d. dequeue
79. A stack is most suitable to evaluate _ _ _ _ _ _ _ expression
a. postfix
b. prefix
c. infix
d. post & infix
80. The circular linked list have
a. no beginning
b. no ending
c. beginning but no ending
d. no beginning and no ending
81. To insert a node at the beginning of the doubly linked list _ _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
82. Doubly linked list uses _ _ _ _ _ _ _ _ no.of pointers
a. Zero
b. One
c. Two
d. Three
83. To insert a node at the beginning of the single linked list _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 0
84. To insert a node at middle of the single linked list _ _ _ _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
85. To insert a node at the end of the doubly linked list _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
86. To insert a node at the end of the single linked list _ _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
87. To delete the first node in single linked list _ _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
88. To delete the last node in single linked list _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 0
89. To delete the middle node in single linked list _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
90. To delete an item in the middle of a circular doubly linked list, _ _ _ _ _ _ _ _ no.of points to be manipulated
a. 2
b. 4
c. 6
d. 8
91. If storage class is missing in the array definition, by default it will be taken to be
a. automatic
b. external
c. static
d. either automatic or external depending on the place of occurrence
92. To delete the last node in doubly linked list _ _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
93. To delete the middle node in doubly linked list _ _ _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
94. To insert an item in a circular doubly linked list, _ _ _ _ _ _ _ no.of points to be manipulated
a. 1
b. 2
c. 3
d. 4
95. Which of the following features of C is meant to provide reliable access to special memory
a. static _ const
b. pragma
c. volatile
d. immutable
96. To insert a node at middle of the doubly linked list _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
97. To delete the first node in doubly linked list _ _ _ _ _ _ _ _ no. of pointers to be manipulated
a. 1
b. 2
c. 3
d. 4
98. To insert an item in a circular single linked list _ _ _ _ _ _ _ _ _ no.of points to be manipulated
a. 2
b. 3
c. 4
d. 1
99. To delete an item in a circular doubly linked list, _ _ _ _ _ _ _ _ no.of points to be manipulated
a. 1
b. 2
c. 3
d. 4
100. A sorting technique is called stable if:
a. it takes O ( n log n) time
b. It maintains the relative order of occurrence of non-distinct elements
c. It uses divide and conquer paradigm
d. The maximum number of nodes in a binary tree of height h is 2^(h+1) - 1 (the height of the root is reckoned as 0)
101. The maximum number of comparisons needed to sort 7 items using radix sort is (assume each item is a 4 digit decimal number)
a. 280
b. 40
c. 47
d. 38
102. If each node in a tree has a value greater than every value in its left sub tree and has value less than every value in its right sub tree, the binary tree is known as
a. Complete binary tree
b. Full binary tree
c. Binary search tree
d. Threaded binary tree
103. A binary tree in which if all its levels except possibly the last, have the maximum number of nodes and all the nodes at the last level appear as far as possible, is known as
a. full binary tree
b. 2-tree
c. threaded tree
d. Complete binary tree
104. You are asked to sort 15 randomly generated numbers. You should prefer
a. bubble sort
b. quick sort
c. merge sort
d. heap sort
105. Which data structure is needed to convert infix notation to post fix notation
a. B-tree
b. Queue
c. Tree
d. Stack
106. The time required to search an element in a binary search tree having n elements is
a. O(1)
b. O(log2 n)
c. O(n)
d. O(n log2 n)
107. A binary tree T has n leaf nodes. The number of nodes of degree 2 in T is
a. log2 n
b. n-1
c. n
d. 2n
108. A tree, for which at every node the height of its left sub tree and right sub tree differ at most by one is a/an
a. Binary search tree
b. AVL tree
c. Complete binary tree
d. Threaded binary tree
109. Which of the following sorting algorithms does not have a worst case running time complexity of O(n2)?
a. Insertion sort
b. Merge sort
c. Quick sort
d. Bubble sort
110. Which of the following is not a correct statement
a. internal sorting is used if the number of items to be sorted is very large
b. External sorting is used if the number of items to be sorted is very large
c. External sorting needs auxiliary storage
d. Internal sorting needs auxiliary storage
111. There are 4 different algorithms A1,A2,A3,A4 to solve a given problem with the order log(n),log(log(n)),nlog(n),n/log(n) respectively. Which is the best algorithm?
a. A1
b. A2
c. A3
d. A4
112. Which of the following algorithms exhibits the unusual behavior that, minimum numbers of comparisons are needed if the list to be sorted is in the reverse order and maximum numbers of comparisons are needed if they are already in sorted order?
a. Heap tree
b. Radix sort
c. Binary insertion sort
d. Selection sort
113. You want to check whether a given set of items is sorted. Which of the following sorting methods will be the most efficient if it is already in sorted order?
a. bubble sort
b. selection sort
c. insertion sort
d. merge sort
114. The way a card game player arranges his cards as he picks them up one by one , is an example of
a. bubble sort
b. selection sort
c. insertion sort
d. merge sort
115. Which of the following sorting algorithm has the worst time complexity of nlog(n)?
a. Heap sort
b. Quick sort
c. Insertion sort
d. Selection sort
116. Which of the following sorting methods sorts a given set of items that is already in sorted order or in reverse sorted order with equal speed?
a. Heap sort
b. Quick sort
c. Insertion sort
d. Selection sort
117. Which of the following sorting methods will be the best if number of swapping done, is the only measure of efficiency?
a. bubble sort
b. insertion sort
c. selection sort
d. heap sort
118. As part of the maintenance work, you are entrusted with the work of rearranging the library books in a shelf in proper order, at the end of each day. The ideal choice will be
a. bubble sort
b. insertion sort
c. selection sort
d. heap sort
119. Sorting is not useful for
a. report generation
b. minimizing the storage needed
c. making searching easier and efficient
d. responding to queries easily
120. A machine took 200 sec to sort 200 names, using bubble sort. In 800 sec. it can approximately sort _ _ _ _ _ _ _ _ _ names
a. 400
b. 800
c. 750
d. 1600
121. A machine needs a minimum of 100 sec. to sort 1000 names by quick sort. The minimum time needed to sort 100 names will be approximately
a. 50.2 sec
b. 6.7 sec
c. 72.7 sec.
d. 11.2 sec.
122. A sorting method with _ _ _ _ _ _ _ _ is the most efficient method
a. O(log n)
b. O(n)
c. O(1)
d. O(n2)
123. Which of the following statements is false?
a. Optimal binary search tree construction can be performed efficiently using dynamic programming
b. Breadth-first search cannot be used to find connected components of a graph
c. Given the prefix and postfix walks of a binary tree, the binary tree cannot be uniquely reconstructed
d. Depth-first search can be used to find the connected components of a graph
124. The average successful search time for sequential search on 'n' items is
a. n/2
b. (n-1)/2
c. (n+1)/2
d. log(n)+1
125. A hash function f defined as f(key)=key mod 7, with linear probing, is used to insert the keys 37,38,72,48,98,1,56, into a table indexed from 0 to 6. What will be the location of key 11?
a. 3
b. 4
c. 5
d. 6
126. The order of the binary search algorithm is
a. n
b. n2
c. nlog(n)
d. log(n)
127. Linked lists are not suitable for implementing
a. insertion sort
b. binary search
c. radix sort
d. polynomial manipulation
128. Stack is useful for
a. radix sort
b. breadth first search
c. heap sort
d. depth first search
129. Which of the following algorithm design technique is used in the quick sort algorithm?
a. Dynamic programming
b. Backtracking
c. Divide and conquer
d. Greedy method
130. The average successful search time taken by binary search on a sorted order array of 10 items is
a. 2.6
b. 2.7
c. 2.8
d. 2.9
131. Consider a 3-ary tree in which every internal node has exactly 3 children. The number of leaf nodes in such a tree with 6 internal nodes will be
a. 10
b. 17
c. 23
d. 13
132. Which of the following traversal techniques lists the nodes of a binary search tree in ascending order?
a. post-order
b. In-order
c. Pre-order
d. No-order
133. A general linear list is a list in which operations, such as retrievals, insertions, changes, and deletions can be done _ _ _ _ _ _ _ _ _
a. any where in the list
b. only at the beginning
c. only at the end
d. only at the middle
134. A(n) _ _ _ _ _ _ _ is a collection of elements and the relationships among them.
a. abstract data type
b. array
c. data structure
d. standard type
135. Data that consists of a single, non-decomposable entity is known as _ _ _ _ _ _
a. atomic data
b. array
c. data structure
d. standard type
136. A binary tree has n leaf nodes. The number of nodes of degree 2 in this tree is
a. logn
b. n-1
c. n
d. 2n
137. A full binary tree with n leaf nodes contains
a. n nodes
b. log2 n nodes
c. 2n-1 nodes
d. 2n nodes
138. The number of binary trees with 3 nodes which when traversed in post-order gives the sequence A,B,C is
a. 3
b. 9
c. 7
d. 5
139. Which of the following need not be a binary tree?
a. Search tree
b. Heap
c. AVL-tree
d. B-tree
140. A binary tree in which every non-leaf node has non-empty left and right subtrees is called a strictly binary tree. Such a tree with 10 leaves
a. cannot be more than 19 nodes
b. has exactly 19 nodes
c. has exactly 17 nodes
d. can not have more than 17 nodes
141. Find the odd man out
a. binary tree
b. AVL tree
c. graph
d. queue
142. The depth of a complete binary tree with n nodes is (log is to the base two)
a. log(n+1)-1
b. log(n)
c. log(n+1)+1
d. log(n)+1
143. The following is an example of a non-linear data structure
a. stack
b. queue
c. tree
d. linear list
144. If a graph is represented as a linked list, _ _ _ _ _ _ _ _ _ no.of list nodes are required
a. 1
b. 2
c. 3
d. 4
145. The number of possible binary trees with 4 nodes is
a. 12
b. 14
c. 13
d. 15
146. The number of possible binary trees with 3 nodes is
a. 12
b. 13
c. 5
d. 15
147. The number of possible ordered trees with 3 nodes A,B,C is
a. 16
b. 12
c. 6
d. 10
148. A tree is a _ _ _ _ _ data structure
a. non-recursive
b. recursive
c. linear
d. non-linear
149. A node that does not have any sub-tree is called a _ _ _ _ _ _ _
a. terminal node
b. root node
c. left node
d. right node
150. The number of edges in a regular graph of degree d and n vertices is
a. maximum of n, d
b. n+d
c. nd
d. nd/2
151. Which of the following algorithms solves the all pair shortest path problem?
a. Dijkstra's algorithm
b. Floyd algorithm
c. Prim's algorithm
d. Warshall's algorithm
152. The minimum number of colors required to color a graph having n (n>3) vertices and 2 edges is
a. 4
b. 3
c. 2
d. 1
153. The maximum degree of any vertex in a simple graph with n vertices is
a. n
b. n-1
c. n+1
d. 2n-1
154. A graph G with n nodes is bipartite if it contains
a. n edges
b. a cycle of odd length
c. no cycle of odd length
d. n2 edges
155. A graph can be represented as an _ _ _ _ _ _
a. Linked list
b. Structure
c. Union
d. Queue
156. A graph can be represented as an _ _ _ _ _ _
a. Array
b. Structure
c. Union
d. Queue
157. The minimum number of edges in a connected cyclic graph on n vertices is
a. n-1
b. n
c. n+1
d. n+2
158. Which of the following is useful in traversing a given graph by breadth first search?
a. Stack
b. Set
c. List
d. Queue
159. Sparse matrices have
a. many zero entries
b. many non-zero entries
c. higher dimensions
d. lower dimensions
160. The maximum no. of edges in an undirected graph without loops with n vertices is
a. n
b. n*(n-1)
c. n*(n-1)/2
d. n-1
161. Which of the following abstract data types can be used to represent a many to many relationship
a. tree
b. graph
c. queue
d. stack
162. In a directed graph without self loops with n vertices, the maximum no. of edges is
a. n
b. n*(n-1)
c. n*(n-1)/2
d. n-1
163. An n vertex undirected graph with exactly n*(n-1)/2 edges is said to be a/an
a. Complete graph
b. Incomplete graph
c. Directed graph
d. Undirected graph
164. To create a node dynamically in a singly linked list _ _ function in C is used
a. malloc()
b. calloc()
c. alloc()
d. dealloc()
165. In an undirected graph, the sum of degrees of all the nodes
a. must be even
b. is thrice the number of edges
c. must be odd
d. need not be even
166. In an undirected graph, the sum of degrees of all the nodes
a. is thrice the number of edges
b. is twice the number of edges
c. must be odd
d. need not be even
167. _ _ _ function is used to in C to dynamically allocate space for more than one object
a. malloc()
b. calloc()
c. alloc()
d. dealloc()
168. _ _ _ function is used to in C to dynamically allocate space for one object
a. malloc()
b. calloc()
c. alloc()
d. dealloc()
169. If n=2, then the value of O(n log n) is
a. 2
b. 4
c. 8
d. 16
170. calloc(m,n); is equivalent to
a. malloc(m*n,0);
b. memset(0,m*n);
c. ptr=malloc(m*n); memset(ptr,0,m*n);
d. ptr=malloc(m*n); strcpy(ptr,0);
171. If the sequence of operations push(1),push(2) ,pop, push(1),push(2),pop, pop, pop, push(2),pop, are performed on a stack, the sequence of popped out values are
a. 2,2,1,1,2
b. 2,2,1,2,2
c. 2,1,2,2,1
d. 2,1,2,2,2
172. The return type of the realloc( ) function is
a. int
b. float
c. char
d. void
173. To delete an element from a queue we use the _ _ _ _ _ operation
a. pop
b. push
c. enqueue
d. dequeue
174. To add an element to a queue we use the _ _ _ _ _ operation
a. pop
b. push
c. enqueue
d. dequeue
175. Which of the memory function allocates a contiguous memory
a. malloc( )
b. calloc( )
c. release( )
d. free( )
176. Return type of a malloc( ) function is
a. int
b. float
c. char
d. void
177. A queue is a _ _ _ _ _ _ structure
a. first in-last out
b. lasting-first-out
c. first in-first out
d. last in-last out
178. A queue is a list in which insertion can be done _ _ _ _
a. any where in the list
b. only at the beginning
c. only at the end
d. only at the middle
179. A _ _ _ _ _ _ is a first in-first out (FIFO) data structure in which insertions are restricted to one end, called the rear, and deletions are restricted to the other end, called the front
a. Stack
b. queue
c. tree
d. binary tree
180. The pointer(s) in a queue points to
a. start of the queue
b. end of the queue
c. middle of the queue
d. both start and end of the queue
181. The disadvantage of the queue is
a. when the item is deleted, the space for that item is not claimed
b. when the item is deleted, the space for that item is claimed
c. a non destructive
d. increases the memory space
182. A queue is a list in which deletion can be done _ _ _ _
a. any where in the list
b. only at the beginning
c. only at the end
d. only at the middle
183. Read() operation in queue is
a. non-destructive
b. additive
c. push()
d. destructive
184. In which of the following data structures is space for an item not reclaimed when the item is deleted?
a. queue
b. circular queue
c. stack
d. linked list
185. As items are deleted from a queue, their space is not reclaimed. This problem is solved by
a. circular queue
b. stack
c. linked list
d. doubly linked list
186. Which of the following operation is used to add an item in a queue
a. write()
b. read()
c. pop()
d. push()
187. _ _ _ _ no.of pointers are required to implement read and write operations in a queue
a. two
b. three
c. four
d. five
188. FIFO is
a. stack
b. queue
c. linked list
d. tree
189. Which of the following operations is used to delete an item from a queue
a. write()
b. read()
c. pop()
d. push()
190. The number of swapping needed to sort the numbers 8,22,7,9,31,19,5,13 in an ascending order, using bubble sort is
a. 11
b. 12
c. 13
d. 14
191. Given two sorted lists of sizes 'm' and 'n' respectively, the number of comparisons needed by the merge sort algorithm will be
a. m x n
b. maximum of m,n
c. minimum of m,n
d. m+n-1
192. For merging two sorted lists of sizes m and n into a sorted list of size m+n, requires _ _ _ _ _ _ _ _ no.of comparisons
a. O(m)
b. O(n)
c. O(m+n)
d. O(log(m)+log(n))
193. The principle of locality justifies the use of
a. interrupts
b. DMA
c. polling
d. cache memory
194. The concatenation of two lists is to be performed in O(1) time. Which of the following implementations of a list could be used?
a. Singly linked list
b. Doubly linked list
c. Circularly doubly linked list
d. Array implementation of list
195. The initial condition of a queue is
a. front=rear=-1
b. front=rear
c. front=rear=n
d. front=rear=1
196. A sorting technique that guarantees that records with the same primary key occur in the same order in the sorted list as in the original unsorted list is said to be
a. stable
b. consistent
c. external
d. linear
197. The average number of comparisons performed by the merge sort algorithm, in merging two sorted lists of length 2, is
a. 8/3
b. 8/5
c. 11/7
d. 1/16
198. Merge sort uses
a. divide and conquer strategy
b. backtracking approach
c. heuristic approach
d. greedy approach
199. Queue can be used to implement
a. radix sort
b. quick sort
c. recursion
d. depth first search





EXAMAPERS123.BLOGSPOT.COM

Basic structure of a C program

Documentation section (comment section)
Link section
Define section
Global declaration section
main( ) ------- main function section
{

Declaration part
Executable part
}

Sub program section
Function 1
Function 2
Function 3 [user defined functions]
...
Function n

A C program can be viewed as a group of building blocks called functions. A function is a subroutine that may include one or more statements designed to perform a specific task. To write a C program, first create the functions and then put them together. A C program may contain one or more of the following sections.

Documentation section :
The documentation section consists of a set of comment lines giving the name of the program, the author, the date, and other details.

Link section :
The link section provides instructions to the compiler to link functions from the system library.
Define section:
The define section defines all symbolic constants.

Global declaration section:
Some variables are used in more than one function; such variables are called global variables and are declared in the global declaration section, i.e., outside of all functions. A variable defined inside a function is called a local variable (private variable).

Main function:
Declaration part
Executable part (input, arithmetic, output, control statements etc)

Sub program section :
This section contains all the user defined functions that are called in the main function. User defined functions are generally placed immediately after the main function although they may appear in any order.

Rules :

1) Each instruction in a C program is written as a separate statement; a complete C program comprises a series of statements. The statements must appear in the same order in which we wish them to be executed.
2) Usually all statements are entered in lowercase letters.
3) C has no specific rules for the position at which a statement is to be written, which is why C is often called a free-form language.
4) Every statement should always end with a semicolon.

Relational operators : 21/9/07
Symbol  Associativity  Description
>       Left to right  Is greater than
>=      Left to right  Is greater than or equal to
<       Left to right  Is less than
<=      Left to right  Is less than or equal to
==      Left to right  Is equal to
!=      Left to right  Is not equal to (note: C has no <> operator)

Note : the value of a relational expression is either one (1) or zero (0).
It is one if the specified relation is true and zero if the specified relation is false.

Logical operators :

Symbol  Associativity  Description
&&      Left to right  Logical AND: true only if both expressions are satisfied
||      Left to right  Logical OR: true if any one of the expressions is satisfied
!       Right to left  Logical NOT (exclamation): reverses the value of an expression, making a true expression false and a false expression true




Truth table :

Op-1      Op-2      op1 && op2   op1 || op2
Non-zero  Non-zero  1            1
Non-zero  Zero      0            1
Zero      Non-zero  0            1
Zero      Zero      0            0


Note :
Like simple relational expressions, a logical expression yields a value of one or zero.

6) Bitwise operators :
C has the distinction of supporting special operators known as bitwise operators for the manipulation of data at the bit level. These operators are used for testing bits or shifting them right or left.
Note: Bitwise operators may not be applied to float or double.

Operator Meaning
& (ampersand) Bitwise AND
| (pipe) Bitwise OR
^ (caret) Bitwise exclusive OR
~ (tilde) Bitwise one's complement
<< Shift left
>> Shift right
Special operators :

The special operators are the comma (,) operator and the sizeof( ) operator.
Syntax: sizeof(datatype);
The pointer operators are & and *, and the member selection operators are -> and .
Example of the comma operator :
x=(a=5,b=6,a+b);   /* x gets a+b = 11 */

Note : The ANSI committee has introduced two preprocessor operators, "stringizing" and "token pasting"; the operators are # and ##.

Control structures in C :-

Decision control structures (bi-directional conditional structures) :-
There are three types of if statements:
1) simple if (simple conditional statements)
2) multiple if, or the if ladder (multiple conditional statements)
3) nested if (nested conditional statements)
C uses the keyword 'if' to implement the decision control instruction; the keyword if tells the compiler that what follows is a decision control instruction.
The condition following the keyword is always enclosed in a pair of parentheses. If the condition is true the statement is executed; if the condition is false the statement is not executed and the program skips past it.

I syntax :-
if (condition)
{
statements;
}
statement x;
Or
if (condition)
statement;
II syntax :
if (condition)
statements;
else
statements;
statement x;
III syntax :
if (condition)
{
statements;
}
else
{
statements;
}
statement x;


The if statement by itself will execute a single statement or a group of statements.
1) The group of statements following the if (and not included in the else) is called the if block.
2) Similarly, the statements after the else are called the else block.


Accept any two integer values and find whether the two integers are equal or not.

Program :

/* Equal of two numbers created on 22-09/07 */

#include <stdio.h>
#include <conio.h>
int main()
{
int x=0,y=0;
clrscr();
printf("enter x integer value :");
scanf("%d",&x);
printf("enter y integer value:");
scanf("%d",&y);
if(x==y)
printf("Both numbers are equal ");
else
printf("not equal");
getch();
return 0;
}


Accept any two integers and find the bigger value. 22/9/07

Program :

/* Biggest of two numbers created on 22-09/07 */

#include <stdio.h>
#include <conio.h>
int main()
{
int x=0,y=0;
clrscr();
printf("enter x integer value :");
scanf("%d",&x);
printf("enter y integer value:");
scanf("%d",&y);
if(x>y)
printf("X is biggest ");
else
printf("Y is biggest");
getch();
return 0;
}

Case studies :
1) Accept any integer value and find whether the given integer is positive or negative.
program

/* To find the integer value is positive or negative */

#include <stdio.h>
#include <conio.h>
int main()
{
int a=0;
clrscr();
printf("enter integer value ");
scanf("%d",&a);
if(a>0)
printf("the integer value %d is positive ",a);
else
printf("the integer value %d is negative",a);
getch();
return 0;
}

2) Accept any integer value and find whether the given integer is even or odd?
Program :
/* To find the integer value is even or odd created on 23-09-07 */

#include <stdio.h>
#include <conio.h>
int main()
{
int x=0;
clrscr();
printf("enter integer value ");
scanf("%d",&x);
if(x%2==0)
printf("the integer value %d is an even number",x);
else
printf("the integer value %d is an odd number",x);
getch();
return 0;
}


3) Accept any alphabetical value and find whether the given alphabet is a vowel or a consonant.
Program 24/09/07

/* To find the given character is vowel or consonant created on 24/9/07*/

#include <stdio.h>
#include <ctype.h>
#include <conio.h>
int main()
{
char x;
clrscr();
printf("enter a character\t");
scanf("%c",&x);
if(toupper(x)=='A'||toupper(x)=='E'||toupper(x)=='I'||toupper(x)=='O'||toupper(x)=='U')
/* or: if(x=='a'||x=='A'||x=='e'||x=='E'||x=='i'||x=='I'||x=='o'||x=='O'||x=='u'||x=='U') */
printf("it is a vowel");
else
printf("it is a consonant");
getch();
return 0;
}
Program : 24/9/07
/* To find the character which is in upper case or lower case by using three methods */
/* created on 24/09/07 */

#include <stdio.h>
#include <ctype.h>
#include <conio.h>
int main()
{
char n;
clrscr();
printf("enter the character:\t");
scanf("%c",&n);
if(isupper(n))
printf("the given character is upper case letter");
else
printf("the given character is lower case letter");
/* SECOND METHOD

if(n>=65&&n<=90)
printf("the given character is uppercase letter");
else
printf("the given character is lowercase letter ");

THIRD METHOD

if(n>='A'&&n<='Z')
printf("the given character %c is uppercase letter");
else
printf("the given characte %c is lowercase letter");*/
getch();
return 0;
}



2) Multiple if (the if-else ladder) :-
Syntax :
if (condition)
{
statements;
}
else if (condition)
{
statements;
}
else if (condition)
{
statements;
}
...
else
{
statements;
}
statement x;

There is another way of putting ifs together when multiple-path decisions are involved. A multiple-path decision is a chain of ifs in which the statement associated with each else is an if.
Program :
/* ABC sales data created on 24/09/07 */

#include <stdio.h>
#include <conio.h>
int main()
{
int ino=0;
unsigned int qty=0;
float r=0,dp=0.00,amt=0,damt=0,net=0;
/* dp=discount percent,damt=discount amount */
char iname[15];
clrscr();
printf("enter the item number :");
scanf("%d",&ino);
printf("enter the item name :");
scanf("%s",iname);
printf("enter the quantity of particular item : ");
scanf("%u",&qty);
printf("enter the rate of the item :");
scanf("%f",&r);
amt=qty*r;
if(amt>=20000)
dp=0.1;
else if(amt>=10000&&amt<20000)
dp=0.07;
else if(amt>=5000&&amt<10000)
dp=0.05;
else
dp=0.02;
damt=amt*dp;

/* {
damt=(amt*0.1);
printf("the discount of the item is gives 10percent %f" ,damt);
}
else if(amt>=10000 && amt<20000)
{
damt=(amt*0.07);
printf("the discount of the item gives 7percent is %f",damt);
}
else if(amt>=5000&&amt<10000)
{
damt=(amt*0.05);
printf("the discount of the item gives is 5percent is %f",damt);
}
else
{
damt=(amt*0.02);
printf("the discount of the item is gives 2percent is %f",damt);
} */
net=amt-damt;
printf("\nthe total amount is %f\n the discount received is %f \ndiscount percent is%f \n total net is %f",amt,damt,dp,net);
getch();
return 0;
}


gotoxy(c,r) : 25/9/7
This function positions the cursor at a desired location within the limits of text mode: x columns and y rows (80 columns, 25 rows).
Program :
/* To calculate the electricity bill created on 25-9-07 */

#include <stdio.h>
#include <conio.h>
int main()
{
int cr=0,pr=0,nou=0;
float R=0,bill=0;
char mno[5],cname[20];
clrscr();
gotoxy(32,4);
printf("ELECTRICITY BILL ");
gotoxy(60,4);
printf("DATE :25/09/2007");
gotoxy(5,5);
printf("*******************************************************************");
gotoxy(5,7);
printf("Enter meter no :");
gotoxy(50,7);
printf("Enter Cname :");
gotoxy(5,8);
printf("********************************************************************");
gotoxy(32,9);
printf("Enter CR :");
gotoxy(32,11);
printf("Enter PR :");
gotoxy(5,13);
printf("********************************************************************");
gotoxy(5,15);
printf("No.of.Units :");
gotoxy(32,15);
printf("Rate :");
gotoxy(50,15);
printf("Bill :");
gotoxy(5,17);
printf("*********************************************************************");
gotoxy(32,19);
printf("THANK Q");
gotoxy(22,7);
scanf("%s",mno);
gotoxy(64,7);
scanf("%s",cname);
gotoxy(42,9);
scanf("%d",&cr);
gotoxy(42,11);
scanf("%d",&pr);
nou=cr-pr;
if(nou>=800)
{
R=5.50;
}
else if(nou>=600&&nou<800)
{
R=3.50;
}
else if(nou>=300&&nou<600)
{
R=2.50;
}
else if(nou>=100&&nou<300)
{
R=1.50;
}
else
{
R=1.00;
}
bill=nou*R;
gotoxy(24,15);
printf("%d",nou);
gotoxy(44,15);
printf("%f",R);
gotoxy(60,15);
printf("%f",bill);
getch();
return 0;
}




