Sunday, July 25, 2010

NOVELL COMPANY EXAM PAPERS





1). A beggar collects cigarette stubs and makes one full cigarette
with every 7 stubs. Once he gets 49 stubs. How many cigarettes
can he smoke in total?
Ans: 8 (49 stubs make 7 cigarettes; smoking those leaves 7 new stubs, which make 1 more.)

2). A soldier loses his way in a thick jungle. He walks at random
from his camp, but mathematically in an interesting fashion:
first he walks one mile east, then half a mile north, then 1/4
mile west, then 1/8 mile south, and so on, making a loop.
Finally, how far is he from his camp, and in which direction?
Ans: in the north-south direction he ends up
1/2 - 1/8 + 1/32 - 1/128 + 1/512 - ... = (1/2)/(1 - (-1/4)) = 2/5 mile north;
similarly in the east-west direction
1 - 1/4 + 1/16 - 1/64 + 1/256 - ... = 1/(1 - (-1/4)) = 4/5 mile east.
Combine the two perpendicular displacements with the Pythagorean theorem.
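Worked out explicitly (a sketch, summing the two alternating geometric series to their limits):

$$E = 1 - \tfrac{1}{4} + \tfrac{1}{16} - \cdots = \frac{1}{1+\tfrac{1}{4}} = \frac{4}{5}\ \text{mile east}, \qquad N = \frac{\tfrac{1}{2}}{1+\tfrac{1}{4}} = \frac{2}{5}\ \text{mile north},$$

$$d = \sqrt{E^2 + N^2} = \sqrt{\tfrac{16}{25} + \tfrac{4}{25}} = \frac{2}{\sqrt{5}} \approx 0.894\ \text{mile, at } \arctan\tfrac{1}{2} \approx 26.6^\circ \text{ north of east}.$$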

3). How can 1000000000 be written as a product of two factors,
neither of them containing zeros?
Ans: 2^9 x 5^9, i.e. 512 x 1953125 = 1000000000; neither factor contains a zero.

4). Conversation between two mathematicians:
First: "I have three children. The product of their ages is 36.
If you sum their ages, it is exactly the same as my neighbour's
door number on my left." The second mathematician checks the
door number and says that it is not sufficient. Then the first
says, "OK, one more clue: my youngest is really the youngest"
(i.e. there is a single youngest child). Immediately the second
mathematician answers. Can you answer the question asked by the
first mathematician? What are the children's ages?
Ans: 1, 6 and 6. (The sum is ambiguous only for 13, which is both
1+6+6 and 2+2+9; of these, only 1, 6, 6 has a unique youngest.)

5). A light glows every 13 seconds. How many times did it glow
between 1:57:58 and 3:20:47 a.m.?
Ans: 383 + 1 = 384.

6). 500 men are arranged in an array of 10 rows and 50 columns.
The tallest man in each row is asked to fall out, and the
shortest among them is A. After they resume their original
positions, the shortest man in each column is asked to fall out,
and the tallest among them is B. Now who is taller, A or B?
Ans: A.

7). A person spends 1/3 of his money on clothes, 1/5 of the
remaining on food and 1/4 of the remaining on travel, and is left
with Rs 100/-. How much did he have in the beginning?
Ans: Rs 250/- (2/3 x 4/5 x 3/4 = 2/5 of the money remains, so
(2/5) x total = 100).

8). There are six boxes containing 5, 7, 14, 16, 18 and 29 balls,
each box holding balls of a single colour, either red or blue.
Some boxes contain only red balls and the others only blue. A
salesman sold one box, and then said, "I have the same number of
red balls left as blue." Which box did he sell?
Ans: the box of 29. Total number of balls = 89, and
(89 - 29)/2 = 60/2 = 30; indeed 14 + 16 = 5 + 7 + 18 = 30.

9). A chain is broken into three pieces of equal length,
containing 3 links each. It is taken to a blacksmith to be joined
into a single continuous chain. How many links have to be opened
to do it?
Ans: 2.

10). Grass in a lawn grows equally thick and at a uniform rate.
It takes 70 cows 24 days, and 30 cows 60 days, to eat it all.
How many cows can eat away the same grass in 96 days?
Ans: 20 (see the working below).
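A worked derivation, under the standard assumption that each cow eats one unit of grass per day (let $g$ be the initial grass and $r$ the daily growth):

$$g + 24r = 70 \times 24 = 1680, \qquad g + 60r = 30 \times 60 = 1800,$$

$$\Rightarrow\; 36r = 120,\; r = \tfrac{10}{3},\; g = 1600; \qquad 96n = g + 96r = 1600 + 320 = 1920 \;\Rightarrow\; n = 20.$$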

11). There is a certain four-digit number whose fourth digit is
twice the first digit.
The third digit is three more than the second digit.
The sum of the first and fourth digits is twice the third digit.
What is that number?
Ans: 2034 and 4368.
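The conditions are easy to verify by brute force. A minimal C++ sketch (the loop and digit names are illustrative, not part of the original paper) confirming that 2034 and 4368 are the only four-digit solutions:

#include <iostream>

int main ()
{
  // check every four-digit number against the three stated conditions
  for (int n = 1000; n <= 9999; ++n)
  {
    int d1 = n / 1000;        // first digit
    int d2 = (n / 100) % 10;  // second digit
    int d3 = (n / 10) % 10;   // third digit
    int d4 = n % 10;          // fourth digit
    if (d4 == 2 * d1 && d3 == d2 + 3 && d1 + d4 == 2 * d3)
      std::cout << n << std::endl;  // prints 2034 and 4368
  }
  return 0;
}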

If you qualify in the first part then you have to appear for
the second, i.e. the following part.
Part 2.

1. From a vessel, on the first day 1/3rd of the liquid
evaporates. On the second day 3/4th of the remaining liquid
evaporates. What fraction of the volume is present at the end of
the second day?

2. An orange glass has orange juice and a white glass has apple
juice, both in equal volumes. 50 ml of the orange juice is taken
and poured into the apple juice; then 50 ml from the white glass
is poured back into the orange glass. Of the two quantities, the
amount of apple juice in the orange glass and the amount of
orange juice in the white glass, which one is greater, and by how
much?

3. There is a 4-inch cube painted on all sides. It is cut into
1-inch cubes. What is the number of small cubes which have no
painted sides?

4. Sam and Mala have a conversation. Sam says, "I am certainly
not over 40." Mala says, "I am 38, and you are at least 5 years
older than me." Now Sam says, "You are at least 39." All the
statements by the two are false. How old are they really?

5. Ram Singh goes to his office in the city every day from his
suburban house. His driver Mangaram drops him at the railway
station in the morning and picks him up in the evening. Every
evening Ram Singh reaches the station at 5 o'clock, and Mangaram
also reaches at the same time. One day Ram Singh started early
from his office and came to the station at 4 o'clock. Not
wanting to wait for the car, he started walking home. Mangaram
started at the normal time, picked him up on the way and took
him back home half an hour early. How long did Ram Singh walk?

6. In a railway station there are two trains going, one on the
harbour line and one on the main line, each with a frequency of
10 minutes. The main line service starts at 5:00 a.m. and the
harbour line at 5:02 a.m. A man goes to the station every day to
catch the first train. What is the probability of the man
catching the first train?

7. Some people went on vacation. Unfortunately it rained for 13
days while they were there, but whenever it rained in the
morning they had a clear afternoon, and vice versa. In all they
enjoyed 11 clear mornings and 12 clear afternoons. How many days
did they stay there in total?

8. Escalator problem (repeat).

9. A survey was taken among 100 people to find their preference
of watching TV programmes. There are 3 channels. Given the number
of people who watch:
at least channel 1,
at least channel 2,
at least channel 3,
no channels at all,
at least channels 1 and 3,
at least channels 1 and 2,
at least channels 2 and 3,
find the number of people who watched all three.

10. Albert and Fernandes have a two-leg swimming race. Both
start from opposite ends of the pool. On the first leg, the boys
pass each other 18 m from the deep end of the pool. During the
second leg they pass 10 m from the shallow end of the pool.
Both go at constant speeds, but one of them is faster. Each boy
rests for 4 seconds at the end of the first leg. What is the
length of the pool?

11. Each letter stands for one digit. What is the maximum value T can take?

      T H I S
    x     I S
    ---------
      X F X X
    X X U X
    ---------
    X X N X X

THE BASIC LANGUAGE











CHAPTER-1

THE BASIC LANGUAGE
• Object oriented programming
• Encapsulation
• Polymorphism
• C++ preprocessor directives
• Comments
• C++ data types
• new and delete expressions
• Type conversions

Object oriented programming: It is nothing but writing programs with the help of objects. So first of all we have to know: what is an object, and how is it implemented in C++ programming? These details are given below.
CLASS: A class is an expanded concept of a data structure: instead of holding only data, it can hold both data and functions.
OBJECT: An object is an instantiation of a class. In terms of variables, a class would be the type, and an object would be the variable.
By using these objects we can access the class members and member functions.

Classes are generally declared using the keyword class, with the following format:

class class_name {
  access_specifier_1:
    member1;
  access_specifier_2:
    member2;
  ...
} object_names;

For example:

class Student {
  char name;
  int marks;
  float average;
} s;
In the above example, s is the object, so we can access the entire data of the Student class through it. We discuss the access specifiers in detail in later concepts.
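For instance, once the object s exists, its members can be read and written with the dot operator. A minimal sketch (the members are made public here so the access compiles; the values are illustrative):

#include <iostream>
using namespace std;

class Student {
public:            // public access so members are reachable through the object
  char name;
  int marks;
  float average;
} s;               // s is an object of class Student

int main ()
{
  s.marks = 450;                // write a data member through the object
  s.average = s.marks / 5.0f;   // use it in a computation
  cout << s.average << endl;    // prints 90
  return 0;
}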
Encapsulation: Wrapping up of data into a single logical unit (i.e. a class) is called encapsulation. So writing a class is an act of encapsulation.
Polymorphism: Simply put, it is one thing exhibiting different behaviours. Consider one person (assume that person is the "one thing"): he exhibits different roles depending on the situation. His son calls him daddy, his father calls him son, and his wife calls him husband. How this concept is implemented in C++ is explained in the upcoming chapters.

Inheritance: A key feature of C++ classes is inheritance. Inheritance allows creating classes which are derived from other classes, so that they automatically include some of their "parent's" members, plus their own. We will see the implementation of this concept in detail in the upcoming chapters.

Comments: comments are parts of the source code disregarded by the compiler. They simply do nothing. Their purpose is only to allow the programmer to insert notes or descriptions embedded within the source code.

C++ supports two ways to insert comments:
//line comment
/*block comment*/
The first of them, known as line comment, discards everything from where the pair of slash signs (//) is found up to the end of that same line. The second one, known as block comment, discards everything between the /* characters and the first appearance of the */ characters, with the possibility of including more than one line.

We are going to add comments to our second program (the commented listing appears at the end of this chapter).

If you include comments within the source code of your programs without using the comment character combinations //, /* or */, the compiler will take them as if they were C++ expressions, most likely causing one or several error messages when you compile it.
Preprocessor directives:
Preprocessor directives are lines included in the code of our programs that are not program statements but directives for the preprocessor. These lines are always preceded by a pound sign (#). The preprocessor runs before the actual compilation of code begins, so the directives themselves generate no code.

These preprocessor directives extend only across a single line of code. As soon as a newline character is found, the preprocessor directive is considered to end. No semicolon (;) is expected at the end of a preprocessor directive. The only way a preprocessor directive can extend through more than one line is by preceding the newline character at the end of the line with a backslash (\).
Macro definitions (#define, #undef)
To define preprocessor macros we can use #define. Its format is:
#define identifier replacement
When the preprocessor encounters this directive, it replaces any occurrence of identifier in the rest of the code by replacement. The replacement can be an expression, a statement, a block or simply anything. The preprocessor does not understand C++; it simply replaces any occurrence of identifier by replacement.
#define TABLE_SIZE 100
int table1[TABLE_SIZE];
int table2[TABLE_SIZE];
After the preprocessor has replaced TABLE_SIZE, the code becomes equivalent to:
int table1[100];
int table2[100];
This use of #define as a constant definer is already known to us from previous tutorials, but #define can also work with parameters to define function macros:
#define getmax(a,b) a>b?a:b
This would replace any occurrence of getmax followed by two arguments by the replacement expression, but also replacing each argument by its identifier, exactly as you would expect if it was a function:
// function macro
#include <iostream>
using namespace std;

#define getmax(a,b) ((a)>(b)?(a):(b))

int main ()
{
  int x = 5, y;
  y = getmax(x,2);
  cout << y << endl;
  cout << getmax(7,x) << endl;
  return 0;
}

Output:
5
7

Defined macros are not affected by block structure. A macro lasts until it is undefined with the #undef preprocessor directive:
#define TABLE_SIZE 100
int table1[TABLE_SIZE];
#undef TABLE_SIZE
#define TABLE_SIZE 200
int table2[TABLE_SIZE];

This would generate the same code as:

int table1[100];
int table2[200];

Function macro definitions accept two special operators (# and ##) in the replacement sequence:
If the operator # is used before a parameter in the replacement sequence, that parameter is replaced by a string literal (as if it were enclosed between double quotes):
#define str(x) #x
cout << str(test);

This would be translated into:
cout << "test";

The operator ## concatenates two arguments leaving no blank spaces between them:

#define glue(a,b) a ## b
glue(c,out) << "test";
This would also be translated into:
cout << "test";
Because preprocessor replacements happen before any C++ syntax check, macro definitions can be a tricky feature. Be careful: code that relies heavily on complicated macros may appear obscure to other programmers, since the syntax it uses can differ from the regular C++ they expect.
Conditional inclusions (#ifdef, #ifndef, #if, #endif, #else and #elif)
These directives allow us to include or discard part of the code of a program if a certain condition is met.
#ifdef allows a section of a program to be compiled only if the macro that is specified as the parameter has been defined, no matter what its value is. For example:
#ifdef TABLE_SIZE
int table [TABLE_SIZE];
#endif
In this case, the line of code int table[TABLE_SIZE]; is only compiled if TABLE_SIZE was previously defined with #define, independently of its value. If it was not defined, that line will not be included in the program compilation.
#ifndef serves for the exact opposite: the code between the #ifndef and #endif directives is only compiled if the specified identifier has not been previously defined. For example:
#ifndef TABLE_SIZE
#define TABLE_SIZE 100
#endif
int table[TABLE_SIZE];
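A very common use of this #ifndef pattern, sketched here as an aside (the header and class names are hypothetical), is the include guard, which keeps a header file from being processed twice:

// student.h -- protected by an include guard
#ifndef STUDENT_H
#define STUDENT_H

class Student {
  int marks;
};

#endif // STUDENT_H

// Even if student.h is #included twice, the class is only defined once.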


/* My second program in C++
   with more comments */
#include <iostream>
using namespace std;

int main ()
{
  cout << "Hello World! ";      // prints Hello World!
  cout << "I'm a C++ program";  // prints I'm a C++ program
  return 0;
}

Output:
Hello World! I'm a C++ program

Friday, May 28, 2010

DATA MINING AND DATA WAREHOUSE

Many software projects accumulate a great deal of data, so we really need information about the effective maintenance and retrieval of data from the database. The newest, hottest technologies to address these concerns are data mining and data warehousing.
Data Mining is the process of automated extraction of predictive information from large databases. It predicts future trends and finds behavior that the experts may miss as it lies beyond their expectations. Data Mining is part of a larger process called knowledge discovery, specifically, the step in which advanced statistical analysis and modeling techniques are applied to the data to find useful patterns and relationships.
Data warehousing takes a relatively simple idea and incorporates it into the technological underpinnings of a company. The idea is that a unified view of all data that a company collects will help improve operations. If hiring data can be combined with sales data, the idea is that it might be possible to discover and exploit patterns in the combined entity.
This paper will present an overview of the different processes and advanced techniques involved in data mining and data warehousing.




1. Introduction to Data Mining:
Data mining can be defined as "a decision support process in which we search for patterns of information in data." This search may be done just by the user, i.e. just by performing queries, in which case it is quite hard and in most of the cases not comprehensive enough to reveal intricate patterns. Data mining uses sophisticated statistical analysis and modeling techniques to uncover such patterns and relationships hidden in organizational databases - patterns that ordinary methods might miss. Once found, the information needs to be presented in a suitable form, with graphs, reports, etc.
1.1 Data Mining Processes
From a process-oriented view, there are three classes of data mining activity: discovery, predictive modeling and forensic analysis.
Discovery is the process of looking in a database to find hidden patterns without a predetermined idea or hypothesis about what the patterns may be. In other words, the program takes the initiative in finding what the interesting patterns are, without the user thinking of the relevant questions first.
In predictive modeling patterns discovered from the database are used to predict the future. Predictive modeling thus allows the user to submit records with some unknown field values, and the system will guess the unknown values based on previous patterns discovered from the database. While discovery finds patterns in data, predictive modeling applies the patterns to guess values for new data items.
Forensic analysis:
This is the process of applying the extracted patterns to find anomalous or unusual data elements. To discover the unusual, we first find what is the norm, and then we detect those items that deviate from the usual within a given threshold. Discovery helps us find "usual knowledge," but forensic analysis looks for unusual and specific cases.
1.2 Data Mining Users and Activities
Data mining activities are usually performed by three different classes of users - executives, end users and analysts.
Executives need top-level insights and spend far less time with computers than the other groups.
End users are sales people, market researchers, scientists, engineers, physicians,
etc.
Analysts may be financial analysts, statisticians, consultants, or database designers.
These users usually perform three types of data mining activity within a corporate environment: episodic, strategic and continuous data mining.
In episodic mining we look at data from one specific episode such as a specific direct marketing campaign. We may try to understand this data set, or use it for prediction on new marketing campaigns. Analysts usually perform episodic mining.
In strategic mining we look at larger sets of corporate data with the intention of gaining an overall understanding of specific measures such as profitability.
In continuous mining we try to understand how the world has changed within a given time period and try to gain an understanding of the factors that influence change.

1.3 Data Mining Applications:
Virtually any process can be studied, understood, and improved using data mining. The top three end uses of data mining are, not surprisingly, in the marketing area.
Data mining can find patterns in a customer database that can be applied to a prospect database so that customer acquisition can be appropriately targeted. For example, by identifying good candidates for mail offers or catalogs, direct-mail marketers can reduce expenses and increase their sales. Targeting specific promotions to existing and potential customers offers similar benefits.
Market-basket analysis helps retailers understand which products are purchased together or by an individual over time. With data mining, retailers can determine which products to stock in which stores, and even how to place them within a store. Data mining can also help assess the effectiveness of promotions and coupons.
Another common use of data mining in many organizations is to help manage customer relationships. By determining characteristics of customers who are likely to leave for a competitor, a company can take action to retain that customer because doing so is usually far less expensive than acquiring a new customer.
Fraud detection is of great interest to telecommunications firms, credit-card companies, insurance companies, stock exchanges, and government agencies. The aggregate total for fraud losses is enormous. But with data mining, these companies can identify potentially fraudulent transactions and contain the damage.
Financial companies use data mining to determine market and industry characteristics as well as predict individual company and stock performance. Another interesting niche application is in the medical field: Data mining can help predict the effectiveness of surgical procedures, diagnostic tests, medications, service management, and process control.
1.4 Data Mining Techniques:
Data Mining has three major components: Clustering or Classification, Association Rules and Sequence Analysis.


1.4.1 Classification:
The clustering techniques analyze a set of data and generate a set of grouping rules that can be used to classify future data. The mining tool automatically identifies the clusters by studying the pattern in the training data. Once the clusters are generated, classification can be used to identify to which particular cluster an input belongs. For example, one may classify diseases and provide the symptoms which describe each class or subclass.
1.4.2 Association:
An association rule is a rule that implies certain association relationships among a set of objects in a database. In this process we discover a set of association rules at multiple levels of abstraction from the relevant set(s) of data in a database. For example, one may discover a set of symptoms often occurring together with certain kinds of diseases and further study the reasons behind them.
1.4.3 Sequential Analysis:
In sequential analysis, we seek to discover patterns that occur in sequence. This deals with data that appear in separate transactions (as opposed to data that appear in the same transaction in the case of association), e.g. if a shopper buys item A in the first week of the month, then he buys item B in the second week, etc.
1.4.4 Neural Nets and Decision Trees:
For any given problem, the nature of the data will affect the techniques you choose. Consequently, you'll need a variety of tools and technologies to find the best possible model. Classification models are among the most common, so the more popular ways for building them have been explained here. Classifications typically involve at least one of two workhorse statistical techniques - logistic regression (a generalization of linear regression) and discriminant analysis. However, as data mining becomes more common, neural nets and decision trees are also getting more consideration. Although complex in their own way, these methods require less statistical sophistication on the part of the user.
Neural nets use many parameters (the nodes in the hidden layer) to build a model that takes and combines a set of inputs to predict a continuous or categorical variable.

Source: "Introduction to Data Mining and Knowledge Discovery" by "Two Crows Corporation"

The value from each hidden node is a function of the weighted sum of the values from all the preceding nodes that feed into it. The process of building a model involves finding the connection weights that produce the most accurate results by "training" the neural net with data. The most common training method is back propagation, in which the output result is compared with known correct values. After each comparison, the weights are adjusted and a new result computed. After enough passes through the training data, the neural net typically becomes a very good predictor.
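As a minimal sketch of the hidden-node computation just described (the sigmoid activation and the particular weights are illustrative assumptions, not taken from the text):

#include <cmath>
#include <iostream>
#include <vector>

// Value of one hidden node: a function (here the sigmoid) of the
// weighted sum of the values from the preceding nodes feeding into it.
double nodeValue (const std::vector<double>& inputs,
                  const std::vector<double>& weights)
{
  double sum = 0.0;
  for (std::size_t i = 0; i < inputs.size(); ++i)
    sum += weights[i] * inputs[i];         // weighted sum of inputs
  return 1.0 / (1.0 + std::exp(-sum));     // sigmoid activation
}

int main ()
{
  std::vector<double> inputs  = {0.5, 0.2, 0.9};   // values from earlier nodes
  std::vector<double> weights = {0.4, -0.6, 0.1};  // weights found by training
  std::cout << nodeValue(inputs, weights) << std::endl;
  return 0;
}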
Decision trees represent a series of rules to lead to a class or value. For example, you may wish to classify loan applicants as good or bad credit risks. Figure below shows a simple decision tree that solves this problem. Armed with this tree and a loan application, a loan officer could determine whether an applicant is a good or bad credit risk. An individual with "Income > $40,000" and "High Debt" would be classified as a "Bad Risk," whereas an individual with "Income < $40,000" and "Job > 5 Years" would be classified as a "Good Risk."
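The figure itself is not reproduced here, but the two rules quoted above determine most of the tree. A minimal C++ sketch of it (the two remaining branches are assumptions, chosen only to make the function total):

#include <iostream>
#include <string>

// Decision tree for loan applicants, following the rules quoted above.
std::string creditRisk (double income, bool highDebt, int yearsOnJob)
{
  if (income > 40000.0)
    return highDebt ? "Bad Risk" : "Good Risk";        // "Good Risk" leaf assumed
  else
    return yearsOnJob > 5 ? "Good Risk" : "Bad Risk";  // "Bad Risk" leaf assumed
}

int main ()
{
  std::cout << creditRisk(50000.0, true, 2)  << std::endl;  // Bad Risk
  std::cout << creditRisk(30000.0, false, 7) << std::endl;  // Good Risk
  return 0;
}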


Decision trees have become very popular because they are reasonably accurate and, unlike neural nets, easy to understand. Decision trees also take less time to build than neural nets. Neural nets and decision trees can also be used to perform regressions, and some types of neural nets can even perform clustering.
2.1 Introduction to Data warehousing:
In the current knowledge economy, it is now an indisputable fact that information is the key for organizations to gain competitive advantage. Organizations know very well that the vital information for decision making is lying in their databases. Mountains of data are getting accumulated in various databases scattered around the enterprise. But the key to gaining competitive advantage lies in deriving insight and intelligence out of these data. Data warehousing helps in integrating, categorizing, codifying and arranging the data from all parts of an enterprise.
According to Bill Inmon, known as the father of data warehousing, a data warehouse is a subject-oriented, integrated, non-volatile and time-variant collection of data in support of management's decisions. Each of these four properties is described below.
2.1.1 Subject oriented data:
All relevant data about a subject is gathered and stored as a single set in a useful format.
2.1.2 Integrated data:
Data is stored in a globally accepted fashion with consistent naming conventions, measurements, encoding structures and physical attributes, even when the underlying operational systems store the data differently.


2.1.3 Non-volatile data:
The data warehouse is read-only: data is loaded into the data warehouse and accessed there.
2.1.4 Time-variant data:
This is long-term data, spanning 5 to 10 years, as opposed to the 30-60 days of operational data.
2.2 Structure of data warehouse:

The design of the data architecture is probably the most critical part of a data warehousing project. The key is to plan for growth and change, as opposed to trying to design the perfect system from the start. The design of the data architecture involves understanding all of the data and how different pieces are related. For example, payroll data might be related to sales data by the ID of the sales person, while the sales data might be related to customers by the customer ID. By connecting these two relationships, payroll data could be related to customers (e.g., which employees have ties to which customers).
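As a tiny illustration of connecting those two relationships in code (the table layouts, IDs and values are entirely hypothetical):

#include <iostream>
#include <map>
#include <string>

int main ()
{
  // payroll: salesperson ID -> salary; sales: salesperson ID -> customer ID
  std::map<std::string, double>      payroll = { {"sp01", 42000.0} };
  std::map<std::string, std::string> sales   = { {"sp01", "cust17"} };

  // joining on the salesperson ID relates payroll data to customers
  for (const auto& entry : sales)
    std::cout << "customer " << entry.second
              << " is tied to an employee earning "
              << payroll[entry.first] << std::endl;
  return 0;
}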
Once the data architecture has been designed, you can then consider the kinds of reports that you are interested in. You might want to see a breakdown of employees by region, or a ranked list of customers by revenue. These kinds of reports are fairly simple. The power of a data warehouse becomes more obvious when you want to look at links between data associated with disparate parts of an organization (e.g., HR, accounts payable, and project management).
2.3 Benefits of Data warehousing:
Cost avoidance benefits.
Higher productivity.
Benefits through better analytical capability.
Manage business complexity.
Leverage on their existing investments.
End user spending.
Spending on e-business.
Accessibility and ease of use.
Real time information and analysis.
2.4 Techniques used by different organizations for an efficient data warehouse:
That being said, most decisions to build data warehouses are driven by non-HR needs. Over the past decade, back office (supply chain) and front office (sales and marketing) organizations have spearheaded the creation of large corporate data warehouses. Improving the efficiency of the supply chain and competition for customers rely on the tactical uses that a data warehouse can provide. The key for other organizations, including HR, is to be involved in the creation of the warehouse so that their needs can be met by any resulting system. A warehouse project usually starts because both the data volume and question complexity have grown beyond what the current systems can handle; at that point the business becomes limited by the information that users can reasonably extract from the data system.
3. Conclusion:
Data mining offers great promise in helping organizations uncover hidden patterns in their data. However, data mining tools must be guided by users who understand the business, the data, and the general nature of the analytical methods involved. Realistic expectations can yield rewarding results across a wide range of applications, from improving revenues to reducing costs.
Building models is only one step in knowledge discovery. It's vital to collect and prepare the data properly and to check models against the real world. The "best" model is often found after building models of several different types and by trying out various technologies or algorithms.
The data mining area is still relatively young, and tools that support the whole of the data mining process in an easy-to-use fashion are rare. However, one of the most important issues facing researchers is the use of techniques against very large data sets. Most mining techniques are based on Artificial Intelligence methods, which are generally executed against small sets of data that can fit in memory. In data mining applications, however, these techniques must be applied to data held in very large databases. Possible approaches include the use of parallelism and the development of new database-oriented techniques. Much work is required before data mining can be successfully applied to large data sets; only then will the true potential of data mining be realized.
Data warehousing is the hottest concept for many software professionals for managing sophisticated data efficiently. The data warehouse is a repository (or archive) of information gathered from multiple sources, stored under a unified scheme, at a single site. Once gathered, the data are stored for a long time, permitting access to historical data. Thus, data warehouses provide the user a single consolidated interface to data, making decision support queries easier to implement. In a world of highly interconnected networks, the data obtained or used by many companies is very large, and its maintenance becomes difficult and costly. So efficient data warehousing should be implemented to obtain data from different branches (all over the world) and maintain it for providing information to all other branches (which do not have the concerned data).