Informatica Interview 4

Design a mapping to convert column data into row data without using the normalizer transformation.
The source data looks like
col1, col2, col3
a, b, c
d, e, f


The target table data should look like
Col
a
b
c
d
e
f

Create three expression transformations with one port each. Connect col1 from the Source Qualifier to the port in the first expression transformation. Connect col2 from the Source Qualifier to the port in the second expression transformation. Connect col3 from the Source Qualifier to the port in the third expression transformation. Create a union transformation with three input groups, each with one port. Now connect the expression transformations to the input groups and connect the union transformation to the target table.
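
The reshaping is easy to sanity-check outside Informatica. Here is a minimal Java sketch (illustration only, with invented names; the loops stand in for the three single-port pipelines feeding the union):

```java
import java.util.ArrayList;
import java.util.List;

public class ColumnsToRows {
    public static void main(String[] args) {
        // Source rows: col1, col2, col3
        String[][] source = { {"a", "b", "c"}, {"d", "e", "f"} };

        // One "expression transformation" per column, one port each;
        // the union stacks the three single-column pipelines.
        List<String> target = new ArrayList<>();
        for (int col = 0; col < 3; col++) {
            for (String[] row : source) {
                target.add(row[col]);   // union input group for this column
            }
        }
        // All six values land in the single target column; note that a real
        // union transformation does not guarantee any particular row order.
        System.out.println(target);     // [a, d, b, e, c, f]
    }
}
```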

Design a mapping to convert row data into column data.
The source data looks like
id, value
10, a
10, b
10, c
20, d
20, e
20, f


The target table data should look like
id, col1, col2, col3
10, a, b, c
20, d, e, f


Step1: Use a sorter transformation and sort the data using the id port as the key. Then connect the sorter transformation to an expression transformation.

Step2: In the expression transformation, create the ports and assign the expressions as mentioned below.

id
value
V_curr_id=id
V_count= IIF(V_curr_id=V_prev_id,V_count+1,1)
V_prev_id=id
O_col1= IIF(V_count=1,value,NULL)
O_col2= IIF(V_count=2,value,NULL)
O_col3= IIF(V_count=3,value,NULL)


Step3: Connect the expression transformation to an aggregator transformation. In the aggregator transformation, create the ports and assign the expressions as mentioned below.

id (specify group by on this port)
O_col1
O_col2
O_col3
col1=MAX(O_col1)
col2=MAX(O_col2)
col3=MAX(O_col3)


Step4: Now connect the ports id, col1, col2, col3 from the aggregator transformation to the target table.
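
To see the three steps end to end, here is a small Java simulation of the sort + running count + MAX-per-group pivot (illustration only; names invented, and plain Java stands in for the expression and aggregator ports):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RowsToColumns {
    public static void main(String[] args) {
        // Already sorted by id, as the sorter transformation guarantees
        String[][] source = {
            {"10", "a"}, {"10", "b"}, {"10", "c"},
            {"20", "d"}, {"20", "e"}, {"20", "f"}
        };

        Map<String, String[]> target = new LinkedHashMap<>();
        String prevId = null;   // V_prev_id
        int count = 0;          // V_count
        for (String[] row : source) {
            count = row[0].equals(prevId) ? count + 1 : 1;  // resets per id
            prevId = row[0];
            // Aggregator step: MAX over the group keeps the non-null value
            // of O_col1..O_col3 for each id.
            String[] cols = target.computeIfAbsent(row[0], k -> new String[3]);
            cols[count - 1] = row[1];
        }
        target.forEach((id, cols) ->
            System.out.println(id + ", " + String.join(", ", cols)));
        // 10, a, b, c
        // 20, d, e, f
    }
}
```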




Informatica Interview 2

Name four output files that the Informatica server creates during a session run.
  • Session Log
  • Workflow Log
  • Errors Log
  • Bad file (reject file)
Why do we use the stored procedure transformation?
A Stored Procedure transformation is an important tool for populating and maintaining databases: it lets a mapping run a database stored procedure and use its results.

What is the difference between static cache and dynamic cache?
A dynamic cache is updated as rows pass through the lookup (new rows can be inserted into the cache and the target), which decreases performance in comparison to a static cache. A static cache is built once and never updated, so it does not track newly inserted rows; duplicate incoming rows are simply inserted into the target as many times as they arrive.

Define mapping and session.
Mapping: It is a set of source and target definitions linked by transformation objects that define the rules for data transformation.
Session: It is a set of instructions that describes how and when to move data from sources to targets.

Which command is used to run a batch?
pmcmd is used to start a batch.

What is data driven?
The Informatica server follows the instructions coded into the Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete or reject.

What is power center repository?
The PowerCenter repository allows you to share metadata across repositories to create a data mart domain.

What is parameter file?
A parameter file is a file created with a text editor such as WordPad or Notepad. You can define the following values in a parameter file:
  • Mapping parameters
  • Mapping variables
  • Session parameters
What are the types of lookup caches?
  • Static cache: You can configure a static or read-only cache for any lookup table. By default, the Informatica server creates a static cache. It caches the lookup table and looks up values in the cache for each row that comes into the transformation. The Informatica server does not update the cache while it processes the lookup transformation.
  • Dynamic cache: If you want to cache the target table and insert new rows into the cache and the target, you can create a lookup transformation that uses a dynamic cache. The Informatica server dynamically inserts data into the target table.
  • Persistent cache: You can save the lookup cache files and reuse them the next time the Informatica server processes a lookup transformation configured to use the cache.
  • Shared cache: You can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping.
  • Recache from database: If the persistent cache is not synchronized with the lookup table, you can configure the lookup transformation to rebuild the lookup cache.
What is a Stored Procedure transformation?
A Stored Procedure transformation is a Passive transformation and can be Connected or Unconnected. It is useful for automating time-consuming tasks and is also used in error handling, to drop and recreate indexes, to determine free space in the database, and for specialized calculations.

What is a fact table?
The centralized table in a star schema is called the fact table. Facts are of three types:
  • Additive: Additive facts are facts that can be summed up through all of the dimensions in the fact table.
  • Non-additive: Non-additive facts are facts that cannot be summed up for any of the dimensions present in the fact table.
  • Semi-additive: Semi-additive facts are facts that can be summed up for some of the dimensions in the fact table, but not the others.
What is a Source Qualifier transformation?
The Source Qualifier transformation is an Active and Connected transformation. When you add a relational or flat file source definition to a mapping, you must connect it to a Source Qualifier transformation. The Source Qualifier performs various tasks such as overriding the default SQL query, filtering records, and joining data from two or more tables.

What is the difference between a mapplet and a reusable transformation?
A mapplet is a set of transformations that is reusable as a group. A reusable transformation is a single transformation that can be reused.

What is Update Strategy transformation?
Update strategy transformation is an active and connected transformation. It is used to update data in target table, either to maintain history of data or recent changes. You can specify how to treat source rows in table, insert, update, delete or data driven.

What types of schemas are used in a data warehouse?
There are three types of schemas.
  • Star schema: A star schema is one in which a central fact table is surrounded by denormalized dimension tables. A star schema can be simple or complex. A simple star schema consists of one fact table, whereas a complex star schema has more than one fact table.
  • Snowflake schema: A snowflake schema is an enhancement of the star schema in which dimension tables are further normalized into additional tables. Snowflake schemas are useful when there are low-cardinality attributes in the dimensions.
  • Galaxy schema: A galaxy schema contains many fact tables with some common dimensions (conformed dimensions). This schema is a combination of many data marts.
What are different types of parsing?
  • Quick parsing
  • Thorough parsing
How do you handle large datasets?
By using bulk mode at the session level and, if possible, by disabling constraints after consulting with the DBA. Bulk mode means no writing takes place in the rollback segment, so loading is faster; the pitfall is that recovery is not possible.

What are the limitations of handling long datatypes?
When the length of a datatype (e.g., varchar2(4000)) goes beyond 4000, Informatica treats it as varchar2(2000).

What are the types of OLAP?

  • ROLAP (Relational OLAP) - Users see their data organized in cubes and dimensions, but the data is really stored in an RDBMS: a storage mode that uses tables in a relational database to store multidimensional structures. Query performance is slow.
  • MOLAP (Multidimensional OLAP) - Users see their data organized in cubes and dimensions, and the data really is stored in an MDBMS. Query performance is fast.
  • HOLAP (Hybrid OLAP) - A combination of ROLAP and MOLAP, in which one finds queries on aggregated data as well as detailed data.
What is metadata?
Data about data. It contains the location and description of data warehouse system components such as names, definitions and end-user views.

How does the recovery mode work in Informatica?
In case of a load failure, an entry is made in the OPB_SERV_ENTRY(?) table, from which the extent of loading can be determined.

What is Aggregate Awareness?
Aggregate awareness is a feature of DESIGNER that makes use of aggregate tables in a database. These are tables that contain pre-calculated data. The purpose of these tables is to enhance the performance of SQL transactions; they are thus used to speed up the execution of queries.

When should you use a star schema and when a snowflake schema?
A star schema is the simplest data warehouse schema. A snowflake schema is similar to the star schema but normalizes the dimension tables to save storage space, and it can be used to represent hierarchies of information.

What parameters can be tweaked to get better performance from a session?
DTM shared memory, index cache memory, and data cache memory; also by indexing, using persistent caches, increasing the commit interval, etc.

Why do we go for surrogate keys?
  • Data tables in various source systems may use different keys for the same entity.
  • Keys may change or be reused in the source data systems.
  • Changes in organizational structures may move keys in the hierarchy.
When is it more convenient to join in the database rather than in Informatica?
Definitely at the database level, in the Source Qualifier query itself, rather than using a Joiner transformation.
How do you measure session performance?
By checking the Collect Performance Data check box in the session properties.

What is Dimension Table?
It contains data used to reference data stored in the fact table.
  • Fewer rows
  • Primarily character data
  • One primary key (dimensional key)
  • Updatable data
What is a database connection?
A connection is a set of parameters that provides access to an RDBMS. These parameters include system information such as the data account, user identification, and the path to the database. Designer provides three types of connections: secured, shared, and personal.

What are all the types of dimensions?
  • Informational Dimension
  • Structural Dimension
  • Categorical Dimension
  • Partitioning Dimension





Informatica Interview 3

How to generate sequence numbers using expression transformation?
In the expression transformation, create a variable port and increment it by 1. Then assign the variable port to an output port. In the expression transformation, the ports are:
V_count=V_count+1
O_count=V_count

Design a mapping to load the first 3 rows from a flat file into a target?
You have to assign row numbers to each record. Generate the row numbers either by using the expression transformation as mentioned above or by using a sequence generator transformation.
Then pass the output to filter transformation and specify the filter condition as O_count <=3
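
A compact Java sketch of the row-number-plus-filter idea (illustration only; the loop variable plays the role of V_count/O_count):

```java
import java.util.List;

public class FirstThreeRows {
    public static void main(String[] args) {
        List<String> source = List.of("a", "b", "c", "d", "e");

        int count = 0;                    // V_count starts at 0
        for (String row : source) {
            count = count + 1;            // O_count = V_count
            if (count <= 3) {             // filter condition O_count <= 3
                System.out.println(row);  // a, b, c
            }
        }
    }
}
```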



Design a mapping to load the last 3 rows from a flat file into a target?
Consider the source has the following data.
col
a
b
c
d
e



Step1: You have to assign row numbers to each record. Generate the row numbers using the expression transformation as mentioned above and call the generated row-number port O_count. Create a DUMMY output port in the same expression transformation and assign 1 to it, so that the DUMMY output port always returns 1 for each row.

In the expression transformation, the ports are
V_count=V_count+1
O_count=V_count
O_dummy=1


The output of expression transformation will be
col, o_count, o_dummy
a, 1, 1
b, 2, 1
c, 3, 1
d, 4, 1
e, 5, 1

Step2: Pass the output of the expression transformation to an aggregator and do not specify any group by condition. Create an output port O_total_records in the aggregator and assign the O_count port to it. The aggregator returns the last row by default. The output of the aggregator contains the DUMMY port, which has value 1, and the O_total_records port, which holds the total number of records in the source.

In the aggregator transformation, the ports are
O_dummy
O_count
O_total_records=O_count


The output of aggregator transformation will be
O_total_records, O_dummy
5, 1

Step3: Pass the outputs of the expression transformation and the aggregator transformation to a joiner transformation and join on the DUMMY port. In the joiner transformation, check the Sorted Input property; only then can you connect both the expression and the aggregator to the joiner transformation.

In the joiner transformation, the join condition will be
O_dummy (port from aggregator transformation) = O_dummy (port from expression transformation)

The output of joiner transformation will be
col, o_count, o_total_records
a, 1, 5
b, 2, 5
c, 3, 5
d, 4, 5
e, 5, 5

Step4: Now pass the output of the joiner transformation to a filter transformation and specify the filter condition as O_total_records (port from aggregator) - O_count (port from expression) <= 2.

In the filter transformation, the filter condition will be
O_total_records - O_count <=2

The output of filter transformation will be
col, o_count, o_total_records
c, 3, 5
d, 4, 5
e, 5, 5
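
The whole pipeline can be simulated in a few lines of Java (illustration only; the first pass plays the aggregator computing O_total_records, the second pass plays the joiner plus filter):

```java
import java.util.List;

public class LastThreeRows {
    public static void main(String[] args) {
        List<String> source = List.of("a", "b", "c", "d", "e");

        int totalRecords = source.size();  // aggregator: O_total_records

        int count = 0;                     // expression: O_count
        for (String row : source) {
            count++;
            if (totalRecords - count <= 2) {   // filter condition
                System.out.println(row + ", " + count + ", " + totalRecords);
            }
        }
        // c, 3, 5
        // d, 4, 5
        // e, 5, 5
    }
}
```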

Design a mapping to load the first record from a flat file into one table A, the last record from a flat file into table B and the remaining records into table C? 
This is similar to the above problem; the first 3 steps are the same. In the last step, instead of using the filter transformation, use a router transformation. In the router transformation, create two output groups.

In the first group, the condition should be O_count=1 and connect the corresponding output group to table A. In the second group, the condition should be O_count=O_total_records and connect the corresponding output group to table B. The output of default group should be connected to table C.



Consider the following products data, which contains duplicate rows.
A
B
C
C
B
D
B



Q1. Design a mapping to load all unique products in one table and the duplicate rows in another table.
The first table should contain the following output
A
D


The second target should contain the following output
B
B
B
C
C

Use a sorter transformation and sort the products data. Pass the output to an expression transformation and create a dummy output port O_dummy and assign 1 to it, so that the DUMMY output port always returns 1 for each row.

The output of expression transformation will be
Product, O_dummy
A, 1
B, 1
B, 1
B, 1
C, 1
C, 1
D, 1

Pass the output of the expression transformation to an aggregator transformation. Check group by on the product port. In the aggregator, create an output port O_count_of_each_product and assign the expression COUNT(product) to it.

The output of aggregator will be
Product, O_count_of_each_product
A, 1
B, 3
C, 2
D, 1

Now pass the outputs of the expression transformation and the aggregator transformation to a joiner transformation and join on the product port. In the joiner transformation, check the Sorted Input property; only then can you connect both the expression and the aggregator to the joiner transformation.

The output of joiner will be
product, O_dummy, O_count_of_each_product
A, 1, 1
B, 1, 3
B, 1, 3
B, 1, 3
C, 1, 2
C, 1, 2
D, 1, 1

Now pass the output of the joiner to a router transformation, create one group and specify the group condition as O_dummy=O_count_of_each_product. Then connect this group to one table. Connect the output of the default group to another table.
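
In Java the same route looks like this (illustration only; a map of counts plays the aggregator, and the if/else plays the two router groups):

```java
import java.util.*;

public class UniqueAndDuplicates {
    public static void main(String[] args) {
        List<String> products = List.of("A", "B", "C", "C", "B", "D", "B");

        // Aggregator step: COUNT(product) per group
        Map<String, Long> counts = new TreeMap<>();
        for (String p : products) counts.merge(p, 1L, Long::sum);

        // Router step: O_dummy (always 1) equals the count only for uniques
        List<String> unique = new ArrayList<>(), duplicates = new ArrayList<>();
        for (String p : products) {
            if (counts.get(p) == 1L) unique.add(p);
            else duplicates.add(p);
        }
        Collections.sort(unique);
        Collections.sort(duplicates);
        System.out.println(unique);      // [A, D]
        System.out.println(duplicates);  // [B, B, B, C, C]
    }
}
```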

Q2. Design a mapping to load each product once into one table and the remaining products which are duplicated into another table.
The first table should contain the following output
A
B
C
D


The second table should contain the following output
B
B
C

Use a sorter transformation and sort the products data. Pass the output to an expression transformation and create a variable port, V_curr_product, and assign the product port to it. Then create a V_count port and in the expression editor write IIF(V_curr_product=V_prev_product, V_count+1, 1). Create one more variable port V_prev_product and assign the product port to it. Now create an output port O_count and assign the V_count port to it.

In the expression transformation, the ports are
Product
V_curr_product=product
V_count=IIF(V_curr_product=V_prev_product,V_count+1,1)
V_prev_product=product
O_count=V_count


The output of expression transformation will be
Product, O_count
A, 1
B, 1
B, 2
B, 3
C, 1
C, 2
D, 1

Now pass the output of the expression transformation to a router transformation, create one group and specify the condition as O_count=1. Then connect this group to one table. Connect the output of the default group to another table.
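
The running-count trick translates directly to Java (illustration only; the loop carries V_count and V_prev_product across rows exactly as the variable ports do):

```java
import java.util.ArrayList;
import java.util.List;

public class FirstOccurrence {
    public static void main(String[] args) {
        // Already sorted, as produced by the sorter transformation
        List<String> products = List.of("A", "B", "B", "B", "C", "C", "D");

        List<String> firstTable = new ArrayList<>();
        List<String> secondTable = new ArrayList<>();
        String prev = null;   // V_prev_product
        int count = 0;        // V_count
        for (String p : products) {
            count = p.equals(prev) ? count + 1 : 1;
            prev = p;
            if (count == 1) firstTable.add(p);  // router group O_count=1
            else secondTable.add(p);            // default group
        }
        System.out.println(firstTable);   // [A, B, C, D]
        System.out.println(secondTable);  // [B, B, C]
    }
}
```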



Consider the following employees data as source
employee_id, salary
10, 1000
20, 2000
30, 3000
40, 5000


Q1. Design a mapping to load the cumulative sum of salaries of employees into target table?
The target table data should look like as
employee_id, salary, cumulative_sum
10, 1000, 1000
20, 2000, 3000
30, 3000, 6000
40, 5000, 11000

Connect the Source Qualifier to an expression transformation. In the expression transformation, create a variable port V_cum_sal and in the expression editor write V_cum_sal+salary. Create an output port O_cum_sal and assign V_cum_sal to it.
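
The variable port simply accumulates across rows, as this Java sketch shows (illustration only):

```java
public class CumulativeSum {
    public static void main(String[] args) {
        int[] ids = {10, 20, 30, 40};
        int[] salaries = {1000, 2000, 3000, 5000};

        int cumSal = 0;  // V_cum_sal; variable ports keep state across rows
        for (int i = 0; i < ids.length; i++) {
            cumSal += salaries[i];  // V_cum_sal = V_cum_sal + salary
            System.out.println(ids[i] + ", " + salaries[i] + ", " + cumSal);
        }
        // 10, 1000, 1000 / 20, 2000, 3000 / 30, 3000, 6000 / 40, 5000, 11000
    }
}
```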

Q2. Design a mapping to get the previous row salary for the current row. If no previous row exists for the current row, then the previous row salary should be displayed as null.
The output should look like as
employee_id, salary, pre_row_salary
10, 1000, Null
20, 2000, 1000
30, 3000, 2000
40, 5000, 3000

Connect the Source Qualifier to an expression transformation. In the expression transformation, create a variable port V_count and increment it by one for each row entering the expression transformation. Also create a V_salary variable port and assign the expression IIF(V_count=1,NULL,V_prev_salary) to it. Then create one more variable port V_prev_salary and assign salary to it. Now create an output port O_prev_salary and assign V_salary to it. Connect the expression transformation to the target ports.

In the expression transformation, the ports will be

employee_id
salary
V_count=V_count+1
V_salary=IIF(V_count=1,NULL,V_prev_salary)
V_prev_salary=salary
O_prev_salary=V_salary
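
The port order matters: V_salary reads V_prev_salary before V_prev_salary is overwritten with the current row's value. A Java rendering of that evaluation order (illustration only):

```java
public class PreviousRowSalary {
    public static void main(String[] args) {
        int[] ids = {10, 20, 30, 40};
        int[] salaries = {1000, 2000, 3000, 5000};

        int count = 0;             // V_count
        Integer prevSalary = null; // V_prev_salary
        for (int i = 0; i < ids.length; i++) {
            count++;
            // V_salary reads the OLD V_prev_salary (last row's salary)
            Integer vSalary = (count == 1) ? null : prevSalary;
            prevSalary = salaries[i];   // only now is it overwritten
            System.out.println(ids[i] + ", " + salaries[i] + ", " + vSalary);
        }
        // 10, 1000, null / 20, 2000, 1000 / 30, 3000, 2000 / 40, 5000, 3000
    }
}
```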


Q3. Design a mapping to get the next row salary for the current row. If there is no next row for the current row, then the next row salary should be displayed as null.
The output should look like as
employee_id, salary, next_row_salary
10, 1000, 2000
20, 2000, 3000
30, 3000, 5000
40, 5000, Null



Step1: Connect the source qualifier to two expression transformations. In each expression transformation, create a variable port V_count and in the expression editor write V_count+1. Now create an output port O_count in each expression transformation. In the first expression transformation, assign V_count to O_count. In the second expression transformation, assign V_count-1 to O_count.

In the first expression transformation, the ports will be

employee_id
salary
V_count=V_count+1
O_count=V_count

In the second expression transformation, the ports will be

employee_id
salary
V_count=V_count+1
O_count=V_count-1

Step2: Connect both expression transformations to a joiner transformation and join them on the port O_count. Consider the first expression transformation as the master and the second one as the detail. In the joiner, specify the join type as Detail Outer Join. In the joiner transformation, check the Sorted Input property; only then can you connect both expression transformations to the joiner transformation.

Step3: Pass the output of the joiner transformation to a target table. From the joiner, connect the employee_id and salary obtained from the first expression transformation to the employee_id and salary ports in the target table. Then from the joiner, connect the salary obtained from the second expression transformation to the next_row_salary port in the target table.
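
The off-by-one join is the heart of the trick: row n in the first pipeline (O_count = n) matches row n+1 in the second pipeline (whose O_count is its row number minus 1). A Java sketch of that pairing (illustration only; a map stands in for the joiner):

```java
import java.util.HashMap;
import java.util.Map;

public class NextRowSalary {
    public static void main(String[] args) {
        int[] ids = {10, 20, 30, 40};
        int[] salaries = {1000, 2000, 3000, 5000};

        // Second pipeline: key = O_count = row number - 1
        Map<Integer, Integer> shifted = new HashMap<>();
        for (int i = 0; i < ids.length; i++) shifted.put(i, salaries[i]);

        // First pipeline row i has O_count = i + 1; the detail outer join
        // finds no match on the last row, leaving next_row_salary null.
        for (int i = 0; i < ids.length; i++) {
            Integer nextSalary = shifted.get(i + 1);
            System.out.println(ids[i] + ", " + salaries[i] + ", " + nextSalary);
        }
        // 10, 1000, 2000 / 20, 2000, 3000 / 30, 3000, 5000 / 40, 5000, null
    }
}
```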

Q4. Design a mapping to find the sum of salaries of all employees and this sum should repeat for all the rows.
The output should look like as
employee_id, salary, salary_sum
10, 1000, 11000
20, 2000, 11000
30, 3000, 11000
40, 5000, 11000
Step1: Connect the source qualifier to the expression transformation. In the expression transformation, create a dummy port and assign value 1 to it.

In the expression transformation, the ports will be

employee_id
salary
O_dummy=1


Step2: Pass the output of expression transformation to aggregator. Create a new port O_sum_salary and in the expression editor write SUM(salary). Do not specify group by on any port.

In the aggregator transformation, the ports will be

salary
O_dummy
O_sum_salary=SUM(salary)


Step3: Pass the outputs of the expression transformation and the aggregator transformation to a joiner transformation and join on the DUMMY port. In the joiner transformation, check the Sorted Input property; only then can you connect both the expression and the aggregator to the joiner transformation.

Step4: Pass the output of joiner to the target table.
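
As a Java sketch (illustration only): the group-less aggregation yields a single row, and the dummy-port join broadcasts it to every detail row:

```java
import java.util.List;

public class RepeatSum {
    public static void main(String[] args) {
        List<Integer> ids = List.of(10, 20, 30, 40);
        List<Integer> salaries = List.of(1000, 2000, 3000, 5000);

        // Aggregator with no group by: one row holding SUM(salary)
        int sumSalary = salaries.stream().mapToInt(Integer::intValue).sum();

        // Joiner on O_dummy attaches that single row to every source row
        for (int i = 0; i < ids.size(); i++) {
            System.out.println(ids.get(i) + ", " + salaries.get(i) + ", " + sumSalary);
        }
        // every row carries salary_sum = 11000
    }
}
```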







Informatica Interview 1

What is Data warehouse? 
A Data warehouse is a subject-oriented, integrated, time-variant, non-volatile collection of data in support of management's decision-making process.

What are the types of data warehouses? 
There are three types of data warehouses:
  1. Enterprise Data Warehouse
  2. ODS (operational data store)
  3. Data Mart
What is Data mart? 
A data mart is a subset of a data warehouse that is designed for a particular line of business, such as sales, marketing, or finance. In a dependent data mart, data is derived from an enterprise-wide data warehouse; in an independent data mart, data can be collected directly from the sources.

What is star schema? 
A star schema is the simplest form of data warehouse schema; it consists of a central fact table and one or more dimension tables.

What is snow flake schema? 
A snowflake schema is one in which a fact table is connected to a number of normalized dimension tables. The snowflake and star schemas are both methods of storing data that is multidimensional in nature.

What are ETL Tools? 
ETL stands for Extraction, Transformation, and Loading of data into the data warehouse for decision making. ETL tools implement the methods involved in accessing and manipulating source data and loading it into the target database.

What are dimension tables? 
Dimension tables contain attributes that describe the fact records in the fact table. 

What is Data Modelling? 
Data modelling is the representation of a real-world set of data structures or entities and their relationships in the form of data models, as required for a database. Data modelling is of various types:
  • Conceptual data modeling
  • Logical data modeling
  • Physical data modeling
  • Enterprise data modeling
  • Relational data modeling
  • Dimensional data modeling
What is a Surrogate key? 
A surrogate key is a substitute for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table. 

What is Data Mining? 
Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. 

What is Operational Data Store? 
An ODS is an operational data store, which comes as a second layer in a data warehouse architecture. It has the characteristics of both OLTP and DSS systems. 

What is the Difference between OLTP and OLAP? 
OLTP is OnLine Transaction Processing, which uses normalized tables. 
OLAP (OnLine Analytical Processing) contains the history of OLTP data, which is non-volatile and acts as a Decision Support System. 

How many types of dimensions are available in Informatica? 
There are three types of dimensions:
  1. Junk dimension
  2. Degenerate dimension
  3. Conformed dimension
What is Difference between ER Modeling and Dimensional Modeling? 
ER modeling is used for normalizing the OLTP database design. Dimensional modeling is used for de-normalizing the ROLAP/MOLAP design. 

What is a mapplet? 
A mapplet is a set of transformations that you build in the Mapplet Designer and can use in multiple mappings. 

What are sessions and batches? 
A session is a set of instructions that tells the server how to move data to the target. A batch is a set of tasks that may include one or more tasks (sessions, event waits, emails, commands, etc.). 

What are slowly changing dimensions? 
Dimensions that change over time are called Slowly Changing Dimensions (SCD).
  • Slowly Changing Dimension Type 1: keeps only current records.
  • Slowly Changing Dimension Type 2: keeps current records plus full historical records.
  • Slowly Changing Dimension Type 3: keeps current records plus one previous record.
What are 2 modes of data movement in Informatica Server? 
There are two modes of data movement:
  1. Normal mode, in which a separate DML statement is prepared and executed for every record.
  2. Bulk mode, in which one DML statement is prepared and executed for multiple records, thus improving performance. 
What is the difference between Active and Passive transformation? 
An Active transformation can change the number of rows that pass through it from source to target, i.e., it eliminates rows that do not meet the transformation's condition.
A Passive transformation does not change the number of rows that pass through it, i.e., it passes all rows through the transformation. 

What is the difference between connected and unconnected transformation? 
A Connected transformation is connected to other transformations or directly to the target table in the mapping.
An Unconnected transformation is not connected to other transformations in the mapping. It is called from within another transformation and returns a value to that transformation. 

What is an Aggregator transformation? 
Aggregator transformation is an Active and Connected transformation. This transformation is useful to perform calculations such as averages and sums (mainly to perform calculations on multiple rows or groups). 

What is an Expression transformation? 
Expression transformation is a Passive and Connected transformation. This can be used to calculate values in a single row before writing to the target. 

What is a Filter transformation? 
Filter transformation is an Active and Connected transformation. This can be used to filter rows in a mapping that do not meet the condition. 

What is a Joiner transformation? 
Joiner Transformation is an Active and Connected transformation. This can be used to join two sources coming from two different locations or from same location. 

Why do we use Lookup transformations? 
Lookup transformations can access data from relational tables that are not sources in the mapping. 

What is a Normalizer transformation? 
Normalizer Transformation is an Active and Connected transformation. It is used mainly with COBOL sources where most of the time data is stored in denormalized format. Also, Normalizer transformation can be used to create multiple rows from a single row of data.  

What is a Rank transformation? 
Rank transformation is an Active and Connected transformation. It is used to select the top or bottom rank of data. 

What is a Router transformation? 
The Router transformation is an Active and Connected transformation. It is similar to the Filter transformation; the only difference is that the Filter transformation drops the data that does not meet the condition, whereas the Router has an option to capture the data that does not meet the condition. It is useful for testing multiple conditions. 

What is a Sorter transformation? 
The Sorter transformation is a Connected and Active transformation. It allows you to sort data in ascending or descending order according to a specified field. 





Oracle APPS DBA Interview 2


How do I check from the server if the forms server is running or not?
It can be checked by querying for the f60ctl process. If it is running, the forms server is running.

How do you relink f60webmx?
It can be done using the following command:
adrelink.sh force=y "fnd f60webmx"

What is FORM60_TIMEOUT?
It is an environment setting that refers to the maximum idle time before f60webmx shuts down.

I am able to launch forms directly, but they hang when launched through PHP. How can I fix it?
Check the profile option "ICX: Forms Launcher" and set the forms URL properly there.

What is ICX: Forms Launcher system profile used for?
It is used by the Self Service Web Applications personal home page (also known as ICX) to determine the base URL needed to launch an application.

What should be the value of ICX: Forms Launcher?
It should be set to http://hostname:port/dev60cgi/f60cgi?

How do we check form version?
Open the form whose version has to be checked. From the menu bar, select Help -> About Oracle Applications. You will find the form version near the bottom, under the heading FORMS.

What is the difference between forms running in socket mode and servlet mode?
When forms run in socket mode, there is a dedicated connection between the client desktop and the forms server, whereas when forms are started in servlet mode, the forms requests are processed by JServ.

How can we find if forms are running in socket or servlet mode?
It can be checked by querying the f60 process; it will show the mode in which the forms are running.

What are the different types of concurrent managers?
There are several types of concurrent managers. The important ones are:
  • Internal Manager
  • Standard Manager
  • Conflict Resolution Manager
  • Custom Manager
What is Internal Concurrent Manager?
It is responsible for controlling all other concurrent managers. Its main task is to make sure that all other concurrent managers are up and running. It controls the other managers through requests made to the service manager. It also starts, stops and restarts the service manager on all nodes.

What is Conflict Resolution Manager (CRM)?
It takes care of resolving program incompatibilities and checks whether a request in the queue can run in parallel with the running requests. If a program is identified as run-alone, it prevents concurrent managers from starting other programs in the same conflict domain.

What is Standard Manager?
The Standard Manager is the master concurrent manager. It is always running and can process any concurrent request. If no other manager is assigned to a program, that program will be picked up by the Standard Manager.

How can we enable/disable Conflict Resolution Manager?
It can be done using the profile option "Concurrent: Use ICM". Set it to "Y" to enable the Conflict Resolution Manager; to disable it, set the profile option to "N".

How do I process more concurrent requests in parallel?
It can be done either by increasing the number of target processes for the manager or by changing the cache size of the manager.

What happens if the Internal Concurrent Manager dies? Are all the managers also killed immediately?

No. If the Internal Manager dies, the requests continue to run normally, except for queue control requests.


Does the ICM run or schedule any request by itself?
No. It does not run or schedule any requests itself. Its function is only to run 'queue control' requests, which are requests to start or stop other managers.

If the ICM goes down, do I need to kill all the managers before restarting the ICM?
No. If the ICM goes down, you need not kill all the managers. You can simply restart it using startmgr.

Can a concurrent manager be deleted?
Yes, it can be deleted. Query the concurrent manager in the Define Concurrent Manager form and then delete the row.

What is Internal Manager?
It is used in Parallel Concurrent Processing (PCP). It monitors whether the ICM is running and, if not, starts it on another node.






Java Interview 5

Define Serialization. What do you mean by Serialization in Java?
Serialization is a mechanism by which you can save or transfer the state of an object by converting it to a byte stream. This is done in Java by implementing the Serializable interface. Serializable is a marker interface that must be implemented for transferring an object over a network or persisting its state to a file. Since it is a marker interface, it does not contain any methods. Implementing this interface enables the conversion of an object into a byte stream so that it can be transferred. The conversion is done by the JVM using its default serialization mechanism.
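
A minimal, self-contained sketch of default serialization (the Employee class and the emp.ser path are invented for this example):

```java
import java.io.*;

// Implementing Serializable is all that is needed; the JVM handles
// the conversion to and from a byte stream.
class Employee implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    int salary;
    Employee(String name, int salary) { this.name = name; this.salary = salary; }
}

public class SerializationDemo {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        Employee e = new Employee("Alice", 1000);

        // Serialize the object to a file
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream("emp.ser"))) {
            out.writeObject(e);
        }

        // Deserialize it back
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream("emp.ser"))) {
            Employee copy = (Employee) in.readObject();
            System.out.println(copy.name + ", " + copy.salary);  // Alice, 1000
        }
    }
}
```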

Why is Serialization required? What is the need to Serialize?
Serialization is required for a variety of reasons: to send the state of an object over a network by means of a socket, to store an object's state in a file, or to manipulate the state of an object as a stream of bytes. The core of Java serialization is the Serializable interface; when your class implements it, it indicates to the serialization runtime that the Java serialization mechanism should be used to serialize the object.

What is the Difference between Externalizable and Serializable Interfaces?
This is one of the top serialization questions asked in many big companies to test your in-depth understanding of serialization. Serializable is a marker interface, so you are not forced to implement any methods; Externalizable, however, contains two methods, readExternal() and writeExternal(), which must be implemented. The Serializable interface provides an inbuilt serialization mechanism, which can be inefficient at times, whereas the Externalizable interface is designed to give you greater control over the serialization mechanism. The two methods give you immense opportunity to enhance the performance of specific object serialization based on application needs. In short, Serializable provides a default serialization mechanism, while Externalizable, instead of relying on default Java serialization, provides the flexibility to control that mechanism. One can drastically improve application performance by implementing the Externalizable interface correctly. However, there is also a chance that you may not write the best implementation, so if you are not really sure about the best way to serialize, stick to the default implementation using the Serializable interface.
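
A minimal sketch of the Externalizable contract (class and fields invented for illustration). Note the public no-argument constructor, which the runtime requires, and that fields must be read back in the order they were written:

```java
import java.io.*;

class Point implements Externalizable {
    int x, y;

    public Point() { }  // required by Externalizable for deserialization

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x);   // you decide exactly what goes into the stream
        out.writeInt(y);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        x = in.readInt();  // read back in the same order as written
        y = in.readInt();
    }
}
```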

When will you use the Serializable or the Externalizable interface, and why?
Most of the time, when you want to do selective attribute serialization, you can use the Serializable interface with the transient modifier for variables that should not be serialized. However, the Externalizable interface can be really effective when you have to serialize only some dynamically selected attributes of a large object. For example, when you have a big Java object with hundreds of attributes and you want to serialize only a dozen dynamically selected attributes to keep the state of the object, you should use the Externalizable interface's writeExternal() method to selectively serialize the chosen attributes. If you have small objects and you know that most or all attributes need to be serialized, you should be fine using the Serializable interface with transient variables as appropriate.

What are the ways to speed up Object Serialization? How to improve Serialization performance?
The default Java serialization mechanism is really useful; however, it can have really bad performance depending on your application and business requirements. The performance of the serialization process heavily depends on the number and size of attributes you are going to serialize for an object. Below are some tips for speeding up the marshaling and unmarshaling of objects during the Java serialization process.
  • Mark unwanted or non-serializable attributes as transient. This is a straightforward benefit, since the attributes for serialization are clearly marked and it can be achieved using the Serializable interface itself.
  • Save only the state of the object, not the derived attributes. Sometimes we keep derived attributes as part of the object; serializing them can be costly, so consider recalculating them during the deserialization process.
  • Serialize attributes only with non-default values. For example, serializing an int variable with value zero just takes extra space, whereas choosing not to serialize it saves performance. This approach avoids unwanted space for some types of attributes, but it requires the Externalizable interface, since attribute serialization is then determined at runtime based on the value of each attribute.
  • Use the Externalizable interface and implement the readExternal() and writeExternal() methods to dynamically identify the attributes to be serialized, possibly with custom logic for the various attributes.

What is a Serial Version UID (serialVersionUID) and why should I use it? How to generate one?
The serialVersionUID represents your class version, and you should change it if the current version of your class is not backwards compatible with its earlier versions. This is an extract from the Java API documentation:
The serialization runtime associates with each serializable class a version number, called a serialVersionUID, which is used during deserialization to verify that the sender and receiver of a serialized object have loaded classes for that object that are compatible with respect to serialization.
Most of the time we probably do not use serialization directly; in such cases it is easy to generate a default serialVersionUID via the quick-fix option in an IDE such as Eclipse.
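
For illustration, the declaration itself is a single field (class name and value arbitrary); an IDE quick fix or the JDK's serialver tool can also generate a value matching the computed default:

```java
import java.io.Serializable;

class Invoice implements Serializable {
    // Bump this only when you make an incompatible change
    // to the class's serialized form.
    private static final long serialVersionUID = 42L;

    double amount;
}
```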

What would happen if the SerialVersionUID of an object is not defined?
If you don't define a serialVersionUID in your serializable class, the Java serialization runtime will compute one by creating a hash from most of your class's attributes and features. When an object gets serialized, this hash code is stamped on the object and is known as the SerialVersionUID of that object. This ID is required for version control of the object. The SerialVersionUID can also be specified explicitly in the class file. If you do not specify it, any added or modified class field changes the computed value, and it will not be possible to recover objects that were already serialized under the old version. It is therefore recommended that you always declare a serialVersionUID in your Serializable classes.

Does setting the serialVersionUID class field improve Java serialization performance?
Declaring an explicit serialVersionUID field in your classes saves some CPU time, but only the first time the JVM process serializes a given class. The gain is not significant: when you have not declared the serialVersionUID, its value is computed by the JVM once and subsequently kept in a soft cache for future use.

What are the alternatives to Serialization? If Serialization is not used, is it possible to persist or transfer an object using any other approach?
If Java serialization is not used, objects can be persisted or transferred in many other ways; some popular methods are listed below:
  • Saving object state to a database: this is the most common technique, used by most applications. You can use ORM tools (e.g., Hibernate) to save the objects in a database and read them back from the database.
  • XML-based data transfer: another popular mechanism; many XML-based web services use it to transfer data over the network, and a lot of tools save XML files to persist data and configurations.
  • JSON data transfer: a recently popular data-transfer format. Many web services are being developed in JSON due to its small footprint and its inherent integration with web browsers through JavaScript.

What are transient variables? What role do they play in Serialization process?
The transient keyword in Java is used to indicate that a field should not be serialized. After deserialization, transient variables retain their default values (e.g., null or zero). Marking unwanted fields as transient can help you boost serialization performance.
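
A small invented example of the effect: transient fields are skipped while writing and come back with their default values after reading:

```java
import java.io.Serializable;

class User implements Serializable {
    private static final long serialVersionUID = 1L;

    String username;             // serialized normally
    transient String password;   // excluded from the byte stream
    transient int loginCount;    // restored as 0 after deserialization
}
```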


Why does serialization NOT save the value of static class attributes? Why static variables are not serialized?
Variables declared static are not considered part of the state of an object, since they are shared by all instances of the class. Saving static variables with each serialized object would have the following problems:
  • It would make redundant copies of the same variable in multiple objects, which is inefficient.
  • A static variable can be modified by any object, so a serialized copy would be stale or out of sync with the current value.

How do you serialize a collection in Java? How do you serialize an ArrayList, HashMap or HashSet object in Java?
All standard implementations of the List, Set and Map interfaces already implement java.io.Serializable. The commonly used collection classes java.util.ArrayList, java.util.Vector, java.util.HashMap, java.util.Hashtable, java.util.HashSet and java.util.TreeSet all implement Serializable, so you do not need to write anything specific to serialize collection objects. However, keep the following in mind before you serialize a collection:
  • Make sure all the objects added to the collection are Serializable.
  • Serializing a collection can be costly, so serialize only the required data instead of the whole collection.
  • If you are using a custom implementation of the Collection interface, you may need to implement serialization for it.

Is it possible to customize the serialization process? How can we customize the serialization process?
Yes, the serialization process can be customized. When an object is serialized, ObjectOutputStream.writeObject() is invoked to save it, and when an object is read, ObjectInputStream.readObject() is invoked. What most people do not know is that Java provides you with an option to define private writeObject and readObject methods in your own class. Once this is done, these two methods are invoked by the JVM instead of the default serialization process.
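
A sketch of that hook (the class name and the encrypt/decrypt helpers are hypothetical placeholders, not a real cipher):

```java
import java.io.*;

class Account implements Serializable {
    private static final long serialVersionUID = 1L;

    String owner;
    transient String pin;  // never written in plain form

    // Invoked by the JVM instead of the default write mechanism
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();                       // non-transient fields
        out.writeObject(pin == null ? null : encrypt(pin));
    }

    // Invoked by the JVM instead of the default read mechanism
    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        String stored = (String) in.readObject();
        pin = (stored == null) ? null : decrypt(stored);
    }

    // Placeholder "cipher" for the sketch only
    private static String encrypt(String s) { return new StringBuilder(s).reverse().toString(); }
    private static String decrypt(String s) { return new StringBuilder(s).reverse().toString(); }
}
```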


What changes are compatible and incompatible with the mechanism of Java serialization?
This is a difficult, tricky question, and answering it correctly means you have an expert understanding of Java serialization. 

In an already serialized object, the most challenging task is to change the structure of a class when a new field is added or removed. As per the Java Serialization specification, adding any method or field is considered a compatible change, whereas changing the class hierarchy or un-implementing the Serializable interface is considered a non-compatible change. You can go through the Java serialization specification for the extensive list of compatible and incompatible changes. If a serialized object needs to be compatible with an older version, the newer version must follow some rules for compatible and incompatible changes. A compatible change to the implementing class is one that can be applied to a new version of the class while keeping the object stream compatible with older versions of the same class.
Some simple examples of compatible changes are:
  • Adding a new field or class will not affect serialization, since any new data in the stream is simply ignored by older versions; the newly added field will be set to its default value when an object of an older version of the class is unmarshaled.
  • Changing access modifiers (private, public, protected or default) is compatible, since they are not reflected in the serialized object stream.
  • Changing a transient field to a non-transient field is a compatible change, since it is similar to adding a field.
  • Changing a static field to a non-static field is a compatible change, since it is also similar to adding a field.
Some simple examples of incompatible changes are:
  • Changing the implementation from the Serializable to the Externalizable interface cannot be done, since this will result in the creation of an incompatible object stream.
  • Deleting an existing serializable field will cause a problem.
  • Changing a non-transient field to a transient field is an incompatible change, since it is similar to deleting a field.
  • Changing a non-static field to a static field is an incompatible change, since it is also similar to deleting a field.
  • Changing the type of an attribute within a class is incompatible, since this would cause a failure when attempting to read and convert the original field into the new field.
  • Changing the package of a class is incompatible, since the fully-qualified class name is written as part of the object byte stream.
