
Saturday 15 February 2014

Oracle Project Resource Management

Oracle Project Resource Management enables enterprises to:

Leverage Single Global Resource Pool
Deploy Resources Collaboratively
Monitor Resource Utilization
Streamline Organization Forecasting
Analyze Resource Demand and Supply

Major Features:
Oracle Project Resource Management enables companies to manage human resource deployment and capacity for project work.
Built using Oracle’s proven self-service model, Oracle Project Resource Management empowers key project stakeholders, such as project managers, resource managers, and staffing managers, to make optimal use of their single most critical asset: their people.
With this application, you can manage project resource needs, profitability, and organization utilization by searching for, locating, and deploying the most qualified people to your projects across your enterprise.
As a result, you can improve customer and employee satisfaction, maximize resource utilization and profitability, and increase your competitive advantage.
Oracle Project Resource Management is part of the E-Business Suite, an integrated set of applications engineered to work together.

Key Considerations while implementing Project Resource Management:
PJR should be used in strongly process-centric organizations; the application only delivers value when processes are followed and data is properly maintained. Resource search for candidates relies on three parameters: resource availability, skill set, and job level.
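
Conceptually, that search is a filter over the resource pool on exactly those three parameters. Below is a minimal Python sketch of the idea; the data model and function are invented for illustration and are not Oracle's API:

# Hypothetical sketch of a PJR-style resource search: filter a resource
# pool on the three parameters above. Not Oracle's API; all names invented.
from datetime import date

resource_pool = [
    {"name": "A. Rao", "available_from": date(2014, 3, 1),
     "skills": {"Oracle Projects", "PL/SQL"}, "job_level": 3},
    {"name": "B. Shah", "available_from": date(2014, 2, 17),
     "skills": {"Java"}, "job_level": 2},
]

def search_resources(pool, needed_by, required_skills, min_job_level):
    """Return resources available by the date, holding the skills and level."""
    return [r for r in pool
            if r["available_from"] <= needed_by          # availability
            and required_skills <= r["skills"]           # skill set
            and r["job_level"] >= min_job_level]         # job level

# Only A. Rao matches: available in time, has the skill, meets the level.
print(search_resources(resource_pool, date(2014, 3, 1),
                       {"Oracle Projects"}, 3))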

Segregation of resources eligible for staffing - Project Resource Management maintains a central resource pool holding data about each resource's availability, skill set, resume, address, job, email, work information, location, and so on. Not all resources in an HR organization need to be defined in the resource pool; limiting the pool refines resource searches and improves the application's performance.
Integration between Project Resource Management and Project Management - Out-of-the-box functionality in the Projects module allows integration between Project Resource Management and Project Management. Integration can be achieved by either a bottom-up or a top-down approach.
Integration between Oracle Time and Labor and Project Resource Management - The objectives of Project Resource Management and Oracle Time and Labor are entirely different: one captures resource planning, the other captures actuals. There is no out-of-the-box functionality that integrates these two modules.

Non-project-centric and small organizations - Project Resource Management is less beneficial for small organizations with few projects or resources, organizations that are not skills-based, or those with little movement of resources across projects. It is generally used by mid-sized and large organizations.

Friday 14 February 2014

What are InfoSets in SAP BI 7.0?

Definition
An InfoSet is a specific kind of InfoProvider: it describes data sources that are, as a rule, defined as joins of DataStore objects, standard InfoCubes, and/or InfoObjects (characteristics with master data). A temporal (time-dependent) join is a join that contains an InfoObject that is a time-dependent characteristic.
An InfoSet is a semantic layer over the data sources.
Unlike the classic InfoSet, an InfoSet is a BI-specific view of data.
For more information, see the following documentation: InfoProviders, Classic InfoSets.

Use
Once an InfoSet is activated, you can define queries on it in the BI suite.
InfoSets allow you to report on several InfoProviders by using combinations of master data-bearing characteristics, InfoCubes and DataStore objects. The information is collected from the tables of the relevant InfoProviders. When an InfoSet is made up of several characteristics, you are able to map transitive attributes and report on this master data.

For example, you could create an InfoSet using the characteristics Business Partner (0BPARTNER) – Vendor (0VENDOR) – Business Name (0DBBUSNAME), and then use that master data for reporting.
You can build time series with an InfoSet by means of a temporal join (see Temporal Joins). With all other types of BI object, the data is determined for the query key date; with a temporal join in an InfoSet, you can instead specify the particular point in time at which the data should be evaluated. The key date of the query is not taken into consideration in the InfoSet.
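
To make that concrete, here is a small illustrative Python sketch of the time-dependent selection a temporal join performs; the record layout and field names are invented, not SAP code:

# Illustrative only: time-dependent master-data selection as performed by a
# temporal join - pick the record version valid at a chosen point in time.
from datetime import date

cost_center_history = [
    {"employee": "1001", "cost_center": "CC-10",
     "date_from": date(2013, 1, 1), "date_to": date(2013, 12, 31)},
    {"employee": "1001", "cost_center": "CC-20",
     "date_from": date(2014, 1, 1), "date_to": date(9999, 12, 31)},
]

def valid_at(records, point_in_time):
    """Return the record versions valid at the given evaluation date."""
    return [r for r in records
            if r["date_from"] <= point_in_time <= r["date_to"]]

# Evaluated at mid-2013, independent of any query key date: CC-10 is returned.
print(valid_at(cost_center_history, date(2013, 6, 15)))
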
Structure
You can include every DataStore object, every InfoCube, and every InfoObject of the type Characteristic with Master Data in a join. A join can contain objects of the same object type or objects of different object types, and the individual objects can appear in a join any number of times. Join conditions (equal join conditions) connect the objects in a join to one another; a join condition determines the combination of records from the individual objects that are included in the resulting set.
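
Conceptually, an equal join condition works like an inner join on a shared characteristic. A rough Python sketch using the characteristic names from the earlier example (the record contents are invented):

# Rough sketch of an equal join condition: keep only the combinations of
# records where both objects carry the same value in the join field.
business_partners = [
    {"0BPARTNER": "BP01", "0VENDOR": "V100"},
    {"0BPARTNER": "BP02", "0VENDOR": "V200"},
]
vendors = [
    {"0VENDOR": "V100", "0DBBUSNAME": "Acme Supplies"},
    {"0VENDOR": "V300", "0DBBUSNAME": "Globex"},
]

def equal_join(left, right, field):
    """Inner equal join: combine records that agree on the join field."""
    return [{**l, **r} for l in left for r in right if l[field] == r[field]]

# Only BP01/V100 satisfies the join condition and enters the resulting set.
print(equal_join(business_partners, vendors, "0VENDOR"))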


DataFlow of Process Chains in different SAP BW-BI

a. Delete Index
b. Data Transfer Process / Execute InfoPackage – optimally in parallel
c. Generate Index
d. Construct Database Statistics
e. Compress cube
f. Delete PSA (aged data requests)
g. Delete Change Log (aged data requests)

Steps for Process Chains in BI 7.0

For a Cube
1. Start
2. Execute InfoPackage
3. Delete Indexes for Cube
4. Execute DTP
5. Create Indexes for Cube

For DSO
1. Start
2. Execute InfoPackage
3. Execute DTP
4. Activate DSO

For an IO
1. Start
2. Execute InfoPackage
3. Execute DTP
4. Attribute Change Run

Data to Cube through a DSO
1. Start
2. Execute InfoPackage (loads up to PSA)
3. Execute DTP (to load DSO from PSA)
4. Activate DSO
5. Further Processing
6. Delete Indexes for Cube
7. Execute DTP (to load Cube from DSO)
8. Create Indexes for Cube
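
Process chains are modeled graphically in transaction RSPC rather than written as code, but the ordering logic of the chain above can be sketched as a simple sequential runner. Everything here is illustrative; the step functions are stand-ins, not SAP APIs:

# Purely illustrative: the cube-via-DSO chain expressed as an ordered
# pipeline that aborts on the first failed step, as a process chain would.
def run_chain(steps):
    for name, step in steps:
        print("running:", name)
        if not step():
            raise RuntimeError("chain aborted at step: " + name)

ok = lambda: True  # placeholder: a real step reports success or failure

cube_via_dso_chain = [
    ("Execute InfoPackage (load to PSA)", ok),
    ("Execute DTP (PSA -> DSO)", ok),
    ("Activate DSO", ok),
    ("Delete Indexes for Cube", ok),   # drop indexes before the mass load
    ("Execute DTP (DSO -> Cube)", ok),
    ("Create Indexes for Cube", ok),   # rebuild indexes for query speed
]

run_chain(cube_via_dso_chain)
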
SAP BW 3.X

Master Data Loading (Attributes, Texts, Hierarchies)

Steps:

1. Start
2. Execute InfoPackages (if you are loading two InfoObjects, run their InfoPackages in parallel)
3. Load in sequence: Attributes, Texts, Hierarchies
4. AND process (connecting all InfoPackages)
5. Attribute Change Run (add all relevant InfoObjects)


For a Cube

1. Start
2. Delete Indexes for Cube
3. Execute InfoPackage
4. Create Indexes for Cube

For DSO

1. Start
2. Execute InfoPackage
3. Activate DSO

For an IO

1. Start
2. Execute InfoPackage
3. Attribute Change Run

Data to Cube through a DSO
1. Start
2. Execute InfoPackage (loads up to PSA)
3. Activate DSO
4. Further Processing
5. Delete Indexes for Cube
6. Execute InfoPackage
7. Create Indexes for Cube


Thursday 13 February 2014

Overview of Hadoop Applications

Hadoop is a software framework generally used to process immense amounts of data in parallel across many servers. In recent years it has become one of the most feasible options for businesses with a never-ending requirement to store and manage data. Web-based businesses such as Amazon, Facebook, Yahoo, and eBay have used high-end Hadoop applications to deal with their large data sets. Hadoop is considered relevant to both small and large organizations.

Hadoop can process a large chunk of data in far less time, allowing companies to run analyses that were not possible earlier within a predetermined time frame. Another significant advantage of Hadoop applications is cost-effectiveness: compared with alternatives, Hadoop removes the high cost of software licenses and the charges for periodic upgrades. Businesses that have to work with large amounts of data are strongly advised to consider Hadoop applications.

Hadoop applications are built on two core parts: HDFS, the Hadoop Distributed File System, and Hadoop MapReduce, which processes data and schedules jobs according to priority. Apart from these two components, there are nine more parts of the Hadoop ecosystem. The most familiar function of Hadoop applications is the storage and analysis of data without loading it into an RDBMS; Hadoop is used to handle huge repositories of semi-structured and unstructured data. Such complex data is very hard to work with in SQL-based tools, for tasks such as data mining and graph analysis.
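
To give a feel for the MapReduce side, here is a tiny self-contained Python simulation of the classic word-count job; it mimics the map, shuffle, and reduce phases on one machine and does not use Hadoop's actual APIs:

# Local simulation of the MapReduce word-count pattern: map emits
# (word, 1) pairs, shuffle groups them by key, reduce sums the counts.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

documents = ["Hadoop stores data in HDFS",
             "Hadoop processes data in parallel"]
print(reduce_phase(shuffle(map_phase(documents))))
# Hadoop distributes these same phases across many servers.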

Several institutes offer Hadoop training. BigClasses also provides Hadoop online training with flexible class timings and outstanding online sessions. Our Hadoop online training will show you how Hadoop is implemented in web businesses and social networking sites. To join free demo classes on Hadoop online training, and to know more about our Hadoop training, reach us using the details below.

For more details call us at 09948030675.
See more at: http://www.sryitsolutions.com/
https://www.facebook.com/sryit?ref=hl

Wednesday 12 February 2014

How to Be a Successful SAP ABAP Developer

ABAP development is critical for addressing solution gaps and custom development on any SAP project. It is very important to know many diverse programming aspects during an SAP implementation project and to follow certain guidelines that can make an SAP ABAP professional very successful. To be a successful ABAP developer you need thorough knowledge of ABAP, and for that you need good SAP ABAP training.

The following are the steps to becoming an efficient SAP ABAP programmer:

Step 1: Any ABAP development project starts with meeting the users or business experts and understanding the business needs to be implemented in the SAP system. The best approach is to conduct a workshop to gather all the business requirements. After the requirements are gathered, an SAP functional consultant or business expert writes a complete functional specification. A well-defined functional specification must include test case scenarios or a UML diagram.

Step 2: Ideally, the ABAP development manager will have created a programming standards and guidelines document. Review this document to learn the naming conventions for function modules, dictionary objects, classes, namespaces, software components, proxies, program input and output parameters, and so on.

Step 3: In most SAP implementation projects, test case documents are written by the functional SAP consultants, but on some projects a programmer may be needed to write test cases. Before writing a test case, review the functional specification document thoroughly, and check the written test case with the users.

Step 4: Read the functional specification and list all the development objects that would be required to implement the specified functionality in the SAP system. Draw a flowchart and review it with experts. The technical design document should comprise a technical overview, the ABAP objects that can be reused, a list of new database objects, the data model, and a class diagram.

Step 5: In this step you realize the ABAP development according to the specification.

Step 6: Follow good SAP ABAP development practices throughout the development lifecycle of the project.

Step 7: Check and test your code after completion. Verify whether the results match the expected results using the specified test cases.
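
Step 7 is, in essence, an automated comparison of actual against expected results. In ABAP you would use ABAP Unit for this; since this post contains no ABAP listings, the idea is sketched below in Python with a made-up function under test:

# Illustrative only: verifying actual results against the expected results
# from a test case document. The function under test is a hypothetical stand-in.
import unittest

def net_price(gross, discount_pct):
    """Hypothetical custom logic being tested."""
    return round(gross * (1 - discount_pct / 100), 2)

class NetPriceTestCase(unittest.TestCase):
    def test_case_from_functional_spec(self):
        # Expected value taken from the specification's test case.
        self.assertEqual(net_price(100.0, 15), 85.0)

if __name__ == "__main__":
    unittest.main()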

Step 8: Write a user document covering all the functionality.

Step 9: User Acceptance Testing

Step 10: Migration to the SAP test system.

You have now learned how to be an efficient SAP ABAP developer. Several institutes provide SAP ABAP online training; SRYITSOLUTIONS is one of them, offering SAP ABAP online training at a low course fee. Our SAP ABAP training is very flexible, and before opting in you can experience our free demo classes on SAP ABAP online training.

For more details call us at 09948030675.

See more at: http://www.sryitsolutions.com/

Teradata –How to secure sensitive information

Teradata:

One thing makes guarding sensitive data both more difficult and more expensive: many companies have this kind of information spread throughout their entire network, across the internet and intranet, on a wide range of systems. Since most encryption solutions will not work on every kind of system, protecting the data is harder, and companies need to find multiple options to keep their sensitive, valuable data secure. They may keep information in Access, Oracle, DB2, or Teradata systems, and also use SQL.

Contact us for more details and session schedules at:
IND: +91-9948030675
USA: +1-319-804-4998
Email: info@sryitsolutions.com

Security Features of Teradata Database:

Security, as a feature of IT control requirements, defines a trait of information systems, and includes specific policy-based mechanisms and assurances for protecting the confidentiality and integrity of information, the availability of critical services and, indirectly, privacy. Data in a data warehouse must be protected at both ends of a transaction for both user and enterprise.

Data warehouse security requires protection of the database, the server on which it resides, and appropriate network access controls. Teradata strongly recommends that customers implement appropriate network perimeter security controls (e.g., firewalls and gateways) to protect network access to a data warehouse, particularly for data warehouse systems deployed on Windows-based operating systems.

If you want to know more about this feature of Teradata and you dream of building your career in the Teradata domain, several institutes provide Teradata training, but SRYIT Solutions provides the best Teradata training. The best feature of our training is that we provide Teradata online training for learners: it is effective because of its flexible timings, and you can learn from anywhere. Beyond the online training, we also provide Teradata corporate training where there is a need.


New Features of Informatica 9 | SRYIT Solutions

There is great demand for Informatica training because Informatica 9 has introduced some new features to the market. This article describes those features of Informatica 9.

  • Informatica 9 supports data integration for the cloud. You can integrate data in cloud applications, as well as run this version of Informatica on cloud infrastructure.
  • Informatica 9 introduces a new tool, Informatica Analyst.
  • The architecture of Informatica 9 differs from, and is more effective than, the previous architecture.
  • It supports a browser-based tool for business analysis.
  • It also supports the data steward role.
  • It permits unified administration through a new admin console that lets users manage PowerCenter and PowerExchange from the same console.
  • It has powerful new capabilities for data quality.
  • It offers a single admin console for PowerCenter, Data Quality, Data Services, and PowerExchange.
  • In this version, IDQ (Informatica Data Quality) is integrated with the Informatica platform; performance, reusability, and manageability are all significantly enhanced.
  • Mapping rules are also shared.
  • Both SQL and web services can be used for real-time dashboarding.
  • Informatica Data Quality offers worldwide address validation support with integrated geocoding.
  • The capability to define rules and to view and execute profiles is available in both Informatica Developer and Informatica Analyst.
  • The Developer tool is Eclipse-based and supports both data integration and data quality, increasing productivity.
  • Informatica can pull data from IMS, DB2 on series, and other legacy system environments such as Datacom, VSAM, and IDMS.
  • Different tools are available for different roles in Informatica 9.
  • It does not include an ESB infrastructure.
  • Informatica 9 supports open interfaces.
  • Informatica 9 complements existing BI architectures by giving immediate access to data through data virtualization.
  • It supports profiling of mainframe data.

Here the dashboards are designed for business executives.
We offer Informatica online training to learners throughout the globe. If you wish to learn more, you can join our Informatica training. Our Informatica online training is very interactive, and all our Informatica trainers are highly qualified and experienced; they are the main strength of our Informatica online training. For more details on Informatica training, contact us at +080 08 527566.