
Saturday 15 February 2014

Oracle Project Resource Management

Oracle Project Resource Management enables enterprises to:

Leverage Single Global Resource Pool
Deploy Resources Collaboratively
Monitor Resource Utilization
Streamline Organization Forecasting
Analyze Resource Demand and Supply

Major Features:
Oracle Project Resource Management enables companies to manage human resource deployment and capacity for project work.
Built using Oracle’s proven self-service model, Oracle Project Resource Management empowers key project stakeholders, such as project managers, resource managers, and staffing managers, to make optimal use of their single most critical asset: their people.
With this application, you can manage project resource needs, profitability, and organization utilization by searching for, locating, and deploying the most qualified people to your projects across the enterprise.
As a result, you can improve customer and employee satisfaction, maximize resource utilization and profitability, and increase your competitive advantage.
Oracle Project Resource Management is part of the E-Business Suite, an integrated set of applications engineered to work together.

Key Considerations while implementing Project Resource Management:
Project Resource Management (PJR) should be used in highly process-centric organizations. The application delivers value only when processes are followed and data is properly maintained. The resource search for candidates relies on three parameters: resource availability, skill set, and job level.

Segregation of resources eligible for staffing - Project Resource Management maintains a central resource pool holding data about resource availability, skill set, resume, address, job, email, work information, location, and so on. Not every resource in an HR organization needs to be defined in the resource pool; limiting the pool refines the resource search and improves application performance.
Integration between Project Resource Management and Project Management - Out-of-the-box functionality in the Projects module allows integration between Project Resource Management and Project Management. Integration can be achieved by either a bottom-up or a top-down approach.
Integration between Oracle Time and Labor and Project Resource Management - The objectives of Project Resource Management and Oracle Time and Labor are entirely different: one captures resource planning while the other records actuals. There is no out-of-the-box functionality that integrates these two modules.

Non-project-centric and small organizations - Project Resource Management is less beneficial for small organizations with few projects or few resources, for non-skills-based organizations, or where there is little movement of resources across projects. It is generally used by mid-sized and large organizations.

Friday 14 February 2014

What are InfoSets in SAP BI 7.0?

Definition
Description of a specific kind of InfoProvider: an InfoSet describes data sources that are usually defined as joins of DataStore objects, standard InfoCubes, and/or InfoObjects (characteristics with master data). A time-dependent join, or temporal join, is a join that contains an InfoObject that is a time-dependent characteristic.
An InfoSet is a semantic layer over the data sources.
Unlike the classic InfoSet, an InfoSet is a BI-specific view of data.
For more information, see the documentation on InfoProviders and Classic InfoSets.


Use
Once activated, InfoSets can be used to define queries in the BI suite.
InfoSets allow you to report on several InfoProviders by using combinations of master data-bearing characteristics, InfoCubes and DataStore objects. The information is collected from the tables of the relevant InfoProviders. When an InfoSet is made up of several characteristics, you are able to map transitive attributes and report on this master data.

For example, you can create an InfoSet using the characteristics Business Partner (0BPARTNER) – Vendor (0VENDOR) – Business Name (0DBBUSNAME), and then use this master data for reporting.
You can evaluate time-dependent data with an InfoSet by means of a temporal join (see Temporal Joins). With all other types of BI objects, the data is determined for the query key date, but with a temporal join in an InfoSet you can specify the particular point in time at which the data is to be evaluated; the key date of the query is not taken into consideration in the InfoSet.
Structure
You can include every DataStore object, every InfoCube and every InfoObject of the type Characteristic with Master Data in a join. A join can contain objects of the same object type, or objects of different object types. The individual objects can appear in a join any number of times. Join conditions connect the objects in a join to one another (equal join-condition). A join condition determines the combination of records from the individual objects that are included in the resulting set.
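
Conceptually, the equal join condition in an InfoSet behaves like an inner join in ABAP Open SQL: only those combinations of records whose join fields match end up in the resulting set. The short report below is only an analogy, not how InfoSets are actually built (they are modeled in the Data Warehousing Workbench); it assumes the SAP flight demo tables SCARR and SFLIGHT are available in the system.

REPORT zinfoset_join_analogy.

* Result structure for the joined fields (example fields only).
TYPES: BEGIN OF ty_result,
         carrid(3)    TYPE c,   " airline code (join field)
         connid(4)    TYPE n,   " flight connection
         carrname(20) TYPE c,   " airline name from the joined table
       END OF ty_result.

DATA lt_result TYPE STANDARD TABLE OF ty_result.
DATA ls_result TYPE ty_result.

* Equal join condition: records are combined only where CARRID matches,
* which is what an InfoSet join condition does for its InfoProviders.
SELECT f~carrid f~connid c~carrname
  INTO TABLE lt_result
  FROM sflight AS f
  INNER JOIN scarr AS c
    ON f~carrid = c~carrid.

LOOP AT lt_result INTO ls_result.
  WRITE: / ls_result-carrid, ls_result-connid, ls_result-carrname.
ENDLOOP.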


DataFlow of Process Chains in different SAP BW-BI

a. Delete Index
b. Data Transfer Process / Execute InfoPackage – optimally in parallel
c. Generate Index
d. Construct Database Statistics
e. Compress cube
f. Delete PSA (aged data requests)
g. Delete Change Log (aged data requests)



Steps for Process Chains in BI 7.0 for a Cube.
1. Start
2. Execute InfoPackage
3. Delete Indexes for Cube
4. Execute DTP
5. Create Indexes for Cube

For DSO
1. Start
2. Execute InfoPackage
3. Execute DTP
4. Activate DSO

For an IO
1. Start
2. Execute InfoPackage
3. Execute DTP
4. Attribute Change Run

Data to Cube through a DSO
1. Start
2. Execute InfoPackage (loads up to PSA)
3. Execute DTP (to load DSO from PSA)
4. Activate DSO
5. Further Processing
6. Delete Indexes for Cube
7. Execute DTP (to load Cube from DSO)
8. Create Indexes for Cube
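
Process chains are normally created and scheduled in the graphical maintenance transaction RSPC rather than in code, but they can also be triggered from a program. The sketch below is only an illustration and assumes that the standard BW API function module RSPC_API_CHAIN_START (and the RSPC_CHAIN / RSPC_LOGID data elements) exists in your release; the chain name ZPC_CUBE_LOAD is a hypothetical example standing in for a chain built from the steps above.

REPORT zstart_process_chain.

* Hypothetical technical name of a process chain like the one described above.
CONSTANTS gc_chain TYPE rspc_chain VALUE 'ZPC_CUBE_LOAD'.

DATA gv_logid TYPE rspc_logid.

* Assumption: RSPC_API_CHAIN_START is the standard BW API for starting a chain.
CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = gc_chain
  IMPORTING
    e_logid = gv_logid
  EXCEPTIONS
    OTHERS  = 1.

IF sy-subrc = 0.
  WRITE: / 'Process chain started, log id:', gv_logid.
ELSE.
  WRITE: / 'Process chain could not be started.'.
ENDIF.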
SAP BW 3.X

Master data loading (Attributes, Texts, Hierarchies)

Steps :

1. Start
2. Execute InfoPackages (if you are loading two InfoObjects, run them in parallel)
3. You might want to load in sequence: Attributes – Texts – Hierarchies
4. AND process (connecting all InfoPackages)
5. Attribute Change Run (add all relevant InfoObjects)


For a Cube

1. Start
2. Delete Indexes for Cube
3. Execute InfoPackage
4. Create Indexes for Cube
For DSO

1. Start
2. Execute InfoPackage
3. Activate DSO

For an IO

1. Start
2. Execute InfoPackage
3. Attribute Change Run

Data to Cube through a DSO
1. Start
2. Execute InfoPackage (loads up to PSA)
3. Activate DSO
4. Further Processing
5. Delete Indexes for Cube
6. Execute InfoPackage
7. Create Indexes for Cube


Thursday 13 February 2014

Overview of Hadoop Applications

Hadoop is a software framework generally used to process immense amounts of data in parallel across many servers. In recent years it has become one of the most feasible options for businesses that have a never-ending requirement to store and manage data. Web-based businesses like Amazon, Facebook, Yahoo, and eBay have used high-end Hadoop applications to deal with their large data sets. Hadoop is considered relevant to both small and large organizations.


Hadoop can process a large chunk of data in less time, which allows companies to run analyses that were not possible earlier within the same time frame. Another significant advantage of Hadoop applications is cost effectiveness: it removes the high cost of software licenses and the periodic upgrade charges incurred when using anything except Hadoop. Businesses that have to work with large amounts of data are strongly recommended to go for Hadoop applications.

Hadoop applications are divided into two parts: one is HDFS and the other is Hadoop MapReduce. HDFS stands for Hadoop Distributed File System. MapReduce helps in processing data and scheduling jobs depending on priorities. Apart from these two components, there are nine more parts of Hadoop. There are three most familiar functions of Hadoop applications. The first is the storage and analysis of data without needing to load it into an RDBMS; this is used for huge repositories of semi-structured and unstructured data, which are very hard to handle with SQL-based tools for tasks such as data mining and graph analysis.

There are several institutes that offer Hadoop training. BigClasses also provides Hadoop online training with flexible class timings and outstanding online sessions. Our Hadoop online training will explain how Hadoop is implemented in web-related businesses and in social networking sites. To join free demo classes on Hadoop online training and to know more about our Hadoop training, reach us at the contact details below.

For more details call us at 09948030675.
See more at: http://www.sryitsolutions.com/
https://www.facebook.com/sryit?ref=hl

Wednesday 12 February 2014

How to Be a Successful SAP ABAP Developer

ABAP development is critical for addressing solution gaps and custom development on any SAP project. It is important to understand many diverse programming aspects during an SAP implementation project and to follow certain guidelines that can make an SAP ABAP professional very successful. To be a successful ABAP developer you need thorough knowledge of the language, and for that you need good SAP ABAP training.


The following are the steps for becoming an efficient SAP ABAP programmer:

Step 1: The first part of any ABAP development project starts with meeting the users or business experts and understanding the business needs that must be implemented in the SAP system. The best approach is to conduct a workshop to gather all the business needs. After all the business requirements are gathered, an SAP functional consultant or business expert writes a complete functional specification. A well-defined functional specification should include test case scenarios or a UML diagram.

Step 2: Ideally, the ABAP development manager will have created a programming standards and guidelines document. Review this document to learn the naming conventions for function modules, dictionary objects, classes, namespaces, software components, proxies, program input and output parameters, and so on.

Step 3: Test case documents are written by the functional SAP consultants in most SAP implementation projects, but on some projects a programmer may need to write test cases. Before writing a test case, review the functional specification document thoroughly and check the written test cases with the users.

Step 4: Read the functional specification and create a list of all the development objects required to implement the specified functionality in the SAP system. Draw a flowchart and review it with experts. The technical design document should comprise a technical overview, ABAP objects that can be reused, a list of new database objects, the data model, and a class diagram.

Step 5: In this step you realize (implement) the ABAP development according to the specification.

Step 6: Good SAP ABAP development practices should be followed throughout the development lifecycle of the project.

Step 7: Check and test the code after completion. Verify whether the results match the expected results using the specified test cases.

Step 8: Write a user document describing all the functionality.

Step 9: User Acceptance Testing

Step 10: Migration to the SAP test system.

You have just learned how to be an efficient SAP ABAP developer. There are several institutes that provide SAP ABAP online training. SRYITSOLUTIONS is one of them; it offers SAP ABAP online training at a low course fee. Our SAP ABAP training is very flexible. Before opting for it, you can experience our free demo classes on SAP ABAP online training.

For more details call us at 09948030675.

See more at: http://www.sryitsolutions.com/

Teradata –How to secure sensitive information

Teradata :

One thing makes guarding sensitive data more difficult and more expensive: many companies have this kind of information spread throughout their entire network, including internet- and intranet-facing systems, on a wide range of platforms. Since most encryption solutions will not work on every kind of system, protecting the data is harder, so companies need multiple options to keep their sensitive and valuable data secured. They may have to keep information in Access, Oracle, DB2, or Teradata systems and also use SQL.


Contact us for more details and Session Schedules at:
  IND: +91- 9948030675, 
USA: +1-319-804-4998
Email: info@sryitsolutions.com, 

Security Features of  Teradata Database:

Security, as a feature of IT control requirements, defines a trait of information systems, and includes specific policy-based mechanisms and assurances for protecting the confidentiality and integrity of information, the availability of critical services and, indirectly, privacy. Data in a data warehouse must be protected at both ends of a transaction for both user and enterprise.

Data warehouse security requires protection of the database, the server on which it resides, and appropriate network access controls. Teradata highly recommends that customers implement appropriate network perimeter security controls (e.g., firewalls and gateways) to protect network access to a data warehouse. Additional considerations apply to data warehouse systems deployed on Windows-based operating systems.

If you want to know more about this feature of Teradata and your dream is to build your career in the Teradata domain, several institutes provide Teradata training, and SRYIT Solutions provides the best Teradata training. The best feature of our training is that we provide Teradata online training for learners. Teradata online training is effective because of its flexible timings, and you can learn from anywhere. In addition to Teradata online training, we also provide Teradata corporate training where there is a need.


New Features of Informatica 9 | SRYIT Solutions

There is great demand for Informatica training because Informatica 9 has introduced some new features to the market. This article describes those features of Informatica 9.

  • Informatica 9 supports data integration for the cloud. You can integrate data in cloud applications, as well as run this version of Informatica on cloud infrastructure.
  • Informatica 9 introduces a new tool, Informatica Analyst.
  • The architecture of Informatica 9 is different from, and more effective than, the previous architecture.
  • It supports a browser-based tool for business analysis.
  • It also supports data stewards.
  • It permits unified administration through a new admin console that lets users manage PowerCenter and PowerExchange from the same console.
  • It has powerful new capabilities for data quality.
  • It offers a single admin console for PowerCenter, Data Quality, Data Services, and PowerExchange.
  • In this version of Informatica, IDQ (Informatica Data Quality) has been integrated with the Informatica platform; performance, reusability, and manageability are all significantly enhanced.
  • Mapping rules can also be shared.
  • Both SQL and web services can be used for real-time dashboarding.
  • Informatica Data Quality offers worldwide address validation support with integrated geocoding.
  • The capability to define rules and to view and execute profiles is available in both Informatica Developer and Informatica Analyst.
  • The developer tool is Eclipse-based and supports both data integration and data quality, increasing productivity.
  • Informatica can pull data from IMS, DB2 on series, and other legacy system environments such as Datacom, VSAM, and IDMS.
  • Different tools are available for different roles in Informatica 9.
  • It does not include ESB infrastructure.
  • Informatica 9 supports open interfaces.
  • Informatica 9 complements existing BI architectures by giving immediate access to data through data virtualization.
  • It supports profiling of mainframe data.

The dashboards here are designed for business executives.
We offer Informatica online training to learners throughout the globe. If you wish to learn more, you can join our Informatica training. Our Informatica online training is very interactive. All our Informatica trainers are highly qualified and experienced; they are the main strength of our Informatica online training. For more details on Informatica training, contact us at +080 08 527566

Tuesday 11 February 2014

Types of ETL Testing

ETL testing is classified into four different sections irrespective of technology or the ETL tools used:


1. New Data Warehouse Testing
A new data warehouse is built and verified from scratch. In this case the data input is taken from the customer requirements and different data sources, and the new data warehouse is built and verified with the help of ETL tools.
Responsibilities:
• The Business Analyst gathers and documents the requirements
• Infrastructure people set up the test environments
• QA testers develop and then execute the test plans and test scripts
• Developers perform unit tests of each module
• Database Administrators test for performance and stress
• Users perform functional tests, including UAT (User Acceptance Testing)

2. Migration Testing
In this case the customer has an existing data warehouse and ETL tool performing the job, but is looking to adopt a new tool to improve efficiency. Migration testing includes the following steps:

Design and validation test
Setting up the test environment
Executing the validation test (it depends on test design and data migration process)
Reporting the bugs

3. Change Request
In this case new data is added from various sources to an existing data warehouse. There might also be a condition where the customer requires a change to their present business rules, or they might integrate a new rule.

4. Report Testing
Reports are the end result of a data warehouse. Reports must be tested by validating the data and the layout in the report. Reports are an important resource for making vital business decisions, from basic reports to all sorts of drill-down reports that let users slice and filter data the way they need.

There is great demand for ETL testing training. Many young professionals opt for ETL testing online training because they do not have much time to join classroom training. Several training institutes offer ETL testing training, and we provide ETL testing online training to aspirants. After completing the course you can join as a junior tester. If you are interested in learning ETL testing, call us today for our free demo classes on ETL testing online training.

SAP BODS INTERVIEW QUESTIONS AND ANSWERS

   1. What is the use of BusinessObjects Data Services?

   Answer:
BusinessObjects Data Services provides a graphical interface that allows you to easily create jobs that extract data from heterogeneous sources, transform that data to meet the business requirements of your organization, and load the data into a single location.

Contact us for more details and Session Schedules at:
  IND: +91- 9948030675, 
USA: +1-319-804-4998
Email: info@sryitsolutions.com, 
                                                                Web: http://www.sryitsolutions.com/ 
   2. Define Data Services components.
   Answer:
Data Services includes the following standard components:
  • Designer
  • Repository
  • Job Server
  • Engines
  • Access Server
  • Adapters
  • Real-time Services
  • Address Server
  • Cleansing Packages, Dictionaries, and Directories
  • Management Console
   3. What are the steps included in Data integration process?
    Answer:
  • Stage data in an operational datastore, data warehouse, or data mart.
  • Update staged data in batch or real-time modes.
  • Create a single environment for developing, testing, and deploying the entire data integration platform.
  • Manage a single metadata repository to capture the relationships between different extraction and access methods and provide integrated lineage and impact analysis.
   4. Define the terms Job, Workflow, and Dataflow
   Answer:
  • A job is the smallest unit of work that you can schedule independently for execution.
  • A work flow defines the decision-making process for executing data flows.
  • Data flows extract, transform, and load data. Everything having to do with data, including reading sources, transforming data, and loading targets, occurs inside a data flow.
   5. Arrange these objects in order by their hierarchy: Dataflow, Job, Project, and Workflow.
   Answer
Project, Job, Workflow, Dataflow. 
   6. What are reusable objects in DataServices?
   Answer:
Job, Workflow, Dataflow.
   7. What is a transform?
    Answer:
 A transform enables you to control how datasets change in a dataflow. 
   8. What is a Script?
   Answer:
A script is a single-use object that is used to call functions and assign values in a workflow.
   9. What is a real time Job?    
Answer:
Real-time jobs "extract" data from the body of the real time message received and from any secondary sources used in the job.
  10. What is an Embedded Dataflow?
  Answer:
An Embedded Dataflow is a dataflow that is called from inside another dataflow.
  11. What is the difference between a data store and a database?
  Answer:
A datastore is a connection to a database.
  12. How many types of datastores are present in Data services?
   Answer:
Three.
  • Database Datastores: provide a simple way to import metadata directly from an RDBMS.
  • Application Datastores: let users easily import metadata from most Enterprise Resource Planning (ERP) systems.
  • Adapter Datastores: can provide access to an application’s data and metadata or just metadata.
  13. What is the use of Compact repository?
   Answer:
Remove redundant and obsolete objects from the repository tables.
  14. What are Memory Datastores?
  Answer:
Data Services also allows you to create a database datastore using Memory as the Database type. Memory Datastores are designed to enhance processing performance of data flows executing in real-time jobs.
  15. What are file formats?
   Answer:
A file format is a set of properties describing the structure of a flat file (ASCII). File formats describe the metadata structure. File format objects can describe files in:
  • Delimited format — Characters such as commas or tabs separate each field.
  • Fixed width format — The column width is specified by the user.
  • SAP ERP and R/3 format.
  16. Which is NOT a datastore type?
  Answer:
 File Format
  17. What is repository? List the types of repositories.
   Answer:
The DataServices repository is a set of tables that holds user-created and predefined system objects, source and target metadata, and transformation rules. There are 3 types of repositories.
  • A local repository
  • A central repository
  • A profiler repository
  18. What is the difference between a Repository and a Datastore?
  Answer:
A Repository is a set of tables that hold system objects, source and target metadata, and transformation rules. A Datastore is an actual connection to a database that holds data.
  19. What is the difference between a Parameter and a Variable?
   Answer:
A Parameter is an expression that passes a piece of information to a work flow, data flow or custom function when it is called in a job. A Variable is a symbolic placeholder for values.
   20. When would you use a global variable instead of a local variable?
   Answer:
  • When the variable will need to be used multiple times within a job.
  • When you want to reduce the development time required for passing values between job components.
  • When you need to create a dependency between job level global variable name and job components.
   21. What is Substitution Parameter?
   Answer:
A value that is constant in one environment but may change when a job is migrated to another environment.
   22. List some reasons why a job might fail to execute?
  Answer:
Incorrect syntax, Job Server not running, port numbers for Designer and Job Server not matching.
   23. List factors you consider when determining whether to run work flows or data flows serially or in parallel?
   Answer:

     Consider the following: 
  • Whether or not the flows are independent of each other
  • Whether or not the server can handle the processing requirements of flows running at the same time (in parallel)
   24. What does a lookup function do? How do the different variations of the lookup function differ?
   Answer:
All lookup functions return one row for each row in the source. They differ in how they choose which of several matching rows to return.
   25. List the three types of input formats accepted by the Address Cleanse transform.
   Answer:
Discrete, multiline, and hybrid.
   26. Name the transform that you would use to combine incoming data sets to produce a single output data set with the same schema as the input data sets.
Answer:
The Merge transform.
   27. What are Adapters?
   Answer:
Adapters are additional Java-based programs that can be installed on the Job Server to provide connectivity to other systems such as Salesforce.com or the Java Messaging Queue. There is also a Software Development Kit (SDK) to allow customers to create adapters for custom applications.
   28. List the data integrator transforms
   Answer:
  • Data_Transfer
  • Date_Generation
  • Effective_Date
  • Hierarchy_Flattening
  • History_Preserving
  • Key_Generation
  • Map_CDC_Operation
  • Pivot Reverse Pivot
  • Table_Comparison
  • XML_Pipeline
   29. List the Data Quality Transforms
   Answer:
  • Global_Address_Cleanse
  • Data_Cleanse
  • Match
  • Associate
  • Country_id
  • USA_Regulatory_Address_Cleanse
   30. What are Cleansing Packages?
   Answer:
These are packages that enhance the ability of Data Cleanse to accurately process various forms of global data by including language-specific reference data and parsing rules.
   31. What is Data Cleanse?
   Answer:
The Data Cleanse transform identifies and isolates specific parts of mixed data, and standardizes your data based on information stored in the parsing dictionary, business rules defined in the rule file, and expressions defined in the pattern file.
   32. What is the difference between Dictionary and Directory?
   Answer:
Directories provide information on addresses from postal authorities. Dictionary files are used to identify, parse, and standardize data such as names, titles, and firm data.
  33. Give some examples of how data can be enhanced through the data cleanse transform, and describe the benefit of those enhancements.
   Answer:
  • Enhancement: Gender Codes. Benefit: determine gender distributions and target marketing campaigns.
  • Enhancement: Match Standards. Benefit: provide fields for improving matching results.
  34. A project requires the parsing of names into given and family, validating address information, and finding duplicates across several systems. Name the transforms needed and the task they will perform.
   Answer:
  • Data Cleanse: Parse names into given and family.
  • Address Cleanse: Validate address information.
  • Match: Find duplicates.
  35. Describe when to use the USA Regulatory and Global Address Cleanse transforms.
   Answer:
Use the USA Regulatory transform if USPS certification and/or additional options such as DPV and Geocode are required. Global Address Cleanse should be utilized when processing multi-country data.
  36. Give two examples of how the Data Cleanse transform can enhance (append) data.
  Answer:
The Data Cleanse transform can generate name match standards and greetings. It can also assign gender codes and prenames such as Mr. and Mrs.
   37. What are name match standards and how are they used?
   Answer:
Name match standards illustrate the multiple ways a name can be represented. They are used in the match process to greatly increase match results.
  38. What are the different strategies you can use to avoid duplicate rows of data when re-loading a job.
   Answer:
  • Using the auto-correct load option in the target table.
  • Including the Table Comparison transform in the data flow.
  • Designing the data flow to completely replace the target table during each execution.
  • Including a preload SQL statement to execute before the table loads.
  39. What is the use of Auto Correct Load?
   Answer:
It prevents duplicate data from entering the target table. It works like a Type 1 update: rows are inserted or updated depending on whether they are non-matching or matching, respectively.
   40. What is the use of Array fetch size?
   Answer:
Array fetch size indicates the number of rows retrieved in a single request to a source database. The default value is 1000. Higher numbers reduce the number of requests, lowering network traffic and possibly improving performance. The maximum value is 5000.
   41. What are the differences between Row-by-row select, Cached comparison table, and Sorted input in the Table Comparison transform?
   Answer:
  • Row-by-row select: looks up the target table using SQL every time it receives an input row. This option is best if the target table is large.
  • Cached comparison table: loads the comparison table into memory. This option is best when the table fits into memory and you are comparing the entire target table.
  • Sorted input: reads the comparison table in the order of the primary key column(s) using a sequential read. This option improves performance because Data Integrator reads the comparison table only once. Add a query between the source and the Table_Comparison transform; then, from the query’s input schema, drag the primary key columns into the Order By box of the query.
   42. What is the use of the Number of loaders option in the target table?
   Answer:
Loading with one loader is known as single-loader loading; loading with more than one loader is known as parallel loading. The default number of loaders is 1, and the maximum number of loaders is 5.
   43. What is the use of Rows per commit?
   Answer:
Specifies the transaction size in number of rows. If set to 1000, Data Integrator sends a commit to the underlying database every 1000 rows.
  44. What is the difference between lookup (), lookup_ext () and lookup_seq ()?
  Answer:
  • lookup(): returns a single value based on a single condition.
  • lookup_ext(): returns multiple values based on one or more conditions.
  • lookup_seq(): returns multiple values based on a sequence number.
  45. What is the use of History preserving transform?
  Answer:
The History_Preserving transform allows you to produce a new row in your target rather than updating an existing row. You can indicate in which columns the transform identifies changes to be preserved. If the value of certain columns change, this transform creates a new row for each row flagged as UPDATE in the input data set.
  46. What is the use of the Map_Operation transform?
  Answer:
The Map_Operation transform allows you to change operation codes on data sets to produce the desired output. The operation codes are INSERT, UPDATE, DELETE, NORMAL, and DISCARD.
   47. What is Hierarchy Flattening?
   Answer:
Constructs a complete hierarchy from parent/child relationships, and then produces a description of the hierarchy in vertically or horizontally flattened format.
  • Parent Column, Child Column
  • Parent Attributes, Child Attributes.
   48. What is the use of Case Transform?
   Answer:
Use the Case transform to simplify branch logic in data flows by consolidating case or decision-making logic into one transform. The transform allows you to split a data set into smaller sets based on logical branches.
   49. What must you define in order to audit a data flow?
   Answer:
You must define audit points and audit rules when you want to audit a data flow.
   50. List some factors for PERFORMANCE TUNING in data services?
   Answer:
The following sections describe ways you can adjust Data Integrator performance
  • Source-based performance options
  • Using array fetch size
  • Caching data
  • Join ordering
  • Minimizing extracted data
  • Target-based performance options
  • Loading method and rows per commit
  • Staging tables to speed up auto-correct loads
  • Job design performance options
  • Improving throughput
  • Maximizing the number of pushed-down operations
  • Minimizing data type conversion
  • Minimizing locale conversion
  • Improving Informix repository performance

Monday 10 February 2014

SAP Modules - Ultimate Guide

An SAP system is divided into modules, such as MM and SD, each of which maps the business processes of a particular department or business unit.



Following is the list of modules available in an SAP system.
  1. SAP FI Module - FI stands for Financial Accounting
  2. SAP CO Module - CO stands for Controlling
  3. SAP PS Module - PS stands for Project Systems
  4. SAP HR Module - HR stands for Human Resources
  5. SAP PM Module - PM stands for Plant Maintenance
  6. SAP MM Module - MM stands for Materials Management
  7. SAP QM Module - QM stands for Quality Management
  8. SAP PP Module - PP stands for Production Planning
  9. SAP SD Module - SD stands for Sales and Distribution
  10. SAP BW Module - BW stands for Business (Data) Warehouse
  11. SAP EC Module - EC stands for Enterprise Controlling
  12. SAP TR Module - TR stands for Treasury
  13. SAP IM Module - IM stands for Investment Management
  14. SAP IS - IS stands for Industry-Specific Solutions
  15. SAP Basis
  16. SAP ABAP
  17. SAP Cross-Application Components
  18. SAP CRM - CRM stands for Customer Relationship Management
  19. SAP SCM - SCM stands for Supply Chain Management
  20. SAP PLM - PLM stands for Product Lifecycle Management
  21. SAP SRM - SRM stands for Supplier Relationship Management
  22. SAP CS - CS stands for Customer Service
  23. SAP SEM - SEM stands for Strategic Enterprise Management
  24. SAP RE - RE stands for Real Estate

The following video explains why SAP is divided into modules.


Video Transcript with Key Takeaways Highlighted:
  • One of the principal reasons SAP is so popular is that it is very flexible and customizable. It is said that if you have the time and money, you can make SAP software drive your car on autopilot.
  • One way to achieve this flexibility is to break the SAP system into different modules, like HR, Finance, and so on, which emulate the business processes of a particular department or business unit.
  • You can integrate one module with another, or even with third-party interfaces.
  • Depending on your organization, you can have just one module, a few, or all the modules of SAP implemented. You can also have integration with third-party systems.
  • It is also possible to integrate modules from different ERP vendors; for example, the PP module from SAP with the HR module of PeopleSoft.
  • The various SAP modules available are:
  • Financial modules, like Financial Accounting, Controlling, etc.
  • Logistics modules, like Materials, Sales, etc.
  • Human Resource Management modules. The Human Resource module emulates HR-related business processes like hiring, appraisals, termination, etc.
  • Likewise, Financial Accounting emulates finance-related business processes and manages financial data.
  • Cross-Application modules, which essentially integrate SAP with other software applications.
  • For our learning purposes, let's focus on the SAP HR module. SAP HR provides comprehensive business processes that map all HR activities in an enterprise.
  • The various sub-modules or functionalities supported by SAP HR are Recruitment, Training & Development, Time Management, Employee Benefits, Payroll, Travel, Cost Planning, Reporting, and ESS & MSS.
  • We will look into the details of the sub-modules later in the trainings.

Introduction to ABAP

ABAP stands for Advanced Business Application Programming. It is a programming language for developing applications for the SAP R/3 system.

The latest version of ABAP is called ABAP Objects and supports object-oriented programming. SAP will run applications written using ABAP/4, the earlier ABAP version, as well as applications using ABAP Objects.

Without further ado, let's dive into ABAP.

Note: this tutorial will not go into extensive detail on ABAP language constructs (which can become very boring to read) but will quickly introduce key concepts to get you started, so you can focus your attention on more important topics.

Data Types

Syntax to declare a variable in ABAP:
DATA variable_name TYPE variable_type.
Example:
DATA employee_number TYPE i.
The following is a list of Data Types supported by ABAP
Data Type   Initial field length   Valid field length   Initial value   Meaning

Numeric types
I           4                      4                    0               Integer (whole number)
F           8                      8                    0               Floating point number
P           8                      1 - 16               0               Packed number

Character types
C           1                      1 - 65535            ' ... '         Text field (alphanumeric characters)
D           8                      8                    '00000000'      Date field (format: YYYYMMDD)
N           1                      1 - 65535            '0 ... 0'       Numeric text field (numeric characters)
T           6                      6                    '000000'        Time field (format: HHMMSS)

Hexadecimal type
X           1                      1 - 65535            X'0 ... 0'      Hexadecimal field
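
To make the table concrete, the short snippet below declares one variable of each elementary type listed above and fills a few of them; the variable names are just examples.

REPORT ztype_demo.

DATA: gv_int        TYPE i,             " integer, initial value 0
      gv_float      TYPE f,             " floating point number
      gv_packed     TYPE p DECIMALS 2,  " packed number with 2 decimals
      gv_text(10)   TYPE c,             " text field, 10 characters
      gv_date       TYPE d,             " date field, format YYYYMMDD
      gv_numtext(5) TYPE n,             " numeric text field
      gv_time       TYPE t,             " time field, format HHMMSS
      gv_hex(2)     TYPE x.             " hexadecimal field

gv_int  = 42.
gv_text = 'ABAP'.
gv_date = sy-datum.                     " current system date
gv_time = sy-uzeit.                     " current system time

WRITE: / gv_int, / gv_text, / gv_date, / gv_time.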
Processing Data

- Assigning Values

a = 16.
MOVE 16 TO a.
WRITE a TO b.

- Arithmetic Operations

COMPUTE a = a * 100.

Control Statements

The following control statements can be used:

- If ... EndIf
if [not] exp [ and / or [not] exp ].
........
[elseif exp.
.......]
[else.
.......]
Endif.
- Case statement
Case variable.
when value1.
.........
when value2.
.........
[ when others.
.........]
Endcase.
-While loop

While <logical expression>.
.....
.....
Endwhile.
- Do loop

Do <n> times.
.....
.....
Enddo.
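
The following small report puts the templates above together; the values and messages are only illustrative.

REPORT zcontrol_demo.

DATA: gv_grade(1) TYPE c VALUE 'B',
      gv_counter  TYPE i VALUE 0.

* IF ... ELSEIF ... ELSE ... ENDIF
IF gv_grade = 'A'.
  WRITE: / 'Excellent'.
ELSEIF gv_grade = 'B'.
  WRITE: / 'Good'.
ELSE.
  WRITE: / 'Needs improvement'.
ENDIF.

* CASE ... ENDCASE
CASE gv_grade.
  WHEN 'A' OR 'B'.
    WRITE: / 'Passed'.
  WHEN OTHERS.
    WRITE: / 'Failed'.
ENDCASE.

* WHILE loop
WHILE gv_counter < 3.
  gv_counter = gv_counter + 1.
  WRITE: / 'While iteration', gv_counter.
ENDWHILE.

* DO loop with a fixed number of iterations
DO 3 TIMES.
  WRITE: / 'Do iteration', sy-index.
ENDDO.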

Logical Operator

A list of logical operators
  • GE or >=
  • GT or >
  • LE or <=
  • LT or <
  • EQ or =
  • NE or <>
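
The textual and symbolic forms are interchangeable, as this small snippet shows; the values are only illustrative.

DATA gv_value TYPE i VALUE 10.

* GE and >= express the same comparison; the same applies to the other pairs.
IF gv_value GE 10.
  WRITE: / 'GE: value is at least 10'.
ENDIF.

IF gv_value >= 10.
  WRITE: / '>=: value is at least 10'.
ENDIF.

IF gv_value NE 5.
  WRITE: / 'NE: value is not equal to 5'.
ENDIF.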

ABAP/4 Editor

Finally, here is where you will spend most of your time as a developer, creating and modifying programs: transaction SE38.