Category Archives: Business Intelligence

IBM Cognos Statistics error RSV-CCP-0005 OXML not found

The Problem

When installing Cognos Statistics, the installation may go smoothly, but when you try to run a report, such as one of the included sample reports, you get the error RSV-CCP-0005 ‘OXML not found’.

OXML is the output file from SPSS, the statistics software acquired by IBM and integrated into a product that adds statistics processing and graphing onto IBM Cognos Business Intelligence. It turns out that the root cause of this error is that IBM Cognos Statistics cannot be installed in a directory with a space in the directory path. See this IBM note: Space in the install path causes RSV-CCP-0005 error when opening or running Statistics reports.

This is unfortunate, because the overwhelming majority of the time people install Cognos BI under ‘Program Files’ or ‘Program Files (x86)’, and Cognos Statistics would naturally be installed in the same directory.

The Solution

Install Cognos Statistics in a directory with no spaces in the path, say, “c:\IBMCognosStatistics”. There will now be a separate service, and it needs different port numbers than the IBM Cognos BI server. When you run Cognos Configuration for Cognos Statistics (assuming it is on the same server), use the same URLs but change the port numbers. For example, instead of 9300, use 9310. Do this for the dispatchers, the logging port and the shutdown port. Leave only the Content Manager URI the same as Cognos BI, because Statistics communicates with the Cognos BI server through the same Content Store.
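
For illustration, here is a minimal sketch of the values to change in Cognos Configuration, assuming both installs share one server. The servername is a placeholder, the alternate ports are arbitrary examples, and the defaults shown are the usual Cognos BI values, so adjust to your environment:

External dispatcher URI:  http://servername:9310/p2pd/servlet/dispatch   (Cognos BI default 9300)
Internal dispatcher URI:  http://servername:9310/p2pd/servlet/dispatch   (Cognos BI default 9300)
Log server port:          9372                                           (Cognos BI default 9362)
Shutdown port:            9409                                           (Cognos BI default 9399)
Content Manager URI:      http://servername:9300/p2pd/servlet            (unchanged; points to Cognos BI)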

Be sure Cognos BI is up and running before saving this configuration, because it will need the Content Manager service for encryption and for registering the new service. When you save and start, there will be a new Statistics service under Windows Services. Ideally, start the IBM Cognos service before the IBM Cognos Statistics service and, conversely, shut down the Statistics service before shutting down the Cognos BI service.

Comments

Anecdotally, this error began with version 10.1, but I haven’t confirmed this. It seems the kind of problem that should be fixed in a future release; maintaining a separate install outside of the standard location is undesirable.

In a green field situation where you are starting from an empty server, you might consider installing Cognos BI and Cognos Statistics in the same directory with no spaces in the path.  In this case, there will be only one configuration and you won’t need separate ports or a separate service.

MSC-ERR-0008 An error has occurred at the transport layer and OP-ERR-0181 At least one invalid member – Cognos and SSAS errors

A client has found a solution to two errors occurring in an environment with Cognos 10.1 and SQL Server 2008.

  • MSC-ERR-0008 An error has occurred at the transport layer.
  • OP-ERR-0181 At least one invalid member reference was encountered in the query.

A hotfix was applied to Windows as described in the two articles below, and initial testing has been positive. The scenario is a single sign-on Cognos environment with Active Directory, using Kerberos to authenticate to SQL Server Analysis Services 2008. The errors seemed intermittent.

http://blogs.msdn.com/b/psssql/archive/2009/04/03/errors-may-occur-after-configuring-analysis-services-to-use-kerberos-authentication-on-advanced-encryption-standard-aware-operating-systems.aspx

http://support.microsoft.com/kb/969083

IBM Cognos BI report error against SSAS – At least one invalid member reference was encountered in the query

Problem

If you encounter the error message “At least one invalid member reference was encountered in the query…” when reporting against SQL Server Analysis Services with a crosstab in IBM Cognos Report Studio 10.1, the problem may be caused by having no data at a row and column intersection.

Solution

A potential solution is to go to the query properties and, under Query Hints, change Suppression to Nulls. This can also improve performance, as it can reduce the data returned from the cube. Note that this is a different setting from the Zero Suppression that suppresses rows and/or columns: it is set on the query, not on the reporting object.

There could be many reasons that a missing member reference was encountered, but this simple change often fixes the problem.

Errors in the OLAP storage engine: A duplicate attribute key has been found when processing

A frequent error when processing an SSAS cube begins “Errors in the OLAP storage engine: A duplicate attribute key has been found when processing”. I have found that it is usually caused by one of two conditions:

Typical Causes

  1. An actual duplicate key is found where a value for a child in a hierarchy occurs with two different parent values.
  2. A value for a data item is NULL.

Solution for actual duplicate key

The former is easier to discover and can be resolved by giving the child attribute a compound key that includes the parent key. Look for it when the field in question is part of a hierarchy: a child can have only one parent, so the same child value appearing under two different parents triggers the error.
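
As a hedged illustration, a query along these lines can expose the offending values before processing; the GEOGRAPHY table and its STATE and CITY columns are hypothetical stand-ins for your own hierarchy source:

-- Find child values that appear under more than one parent
SELECT CITY, COUNT(DISTINCT STATE) AS ParentCount
FROM GEOGRAPHY
GROUP BY CITY
HAVING COUNT(DISTINCT STATE) > 1

Any rows returned are children with multiple parents; keying the child attribute on (STATE, CITY) resolves the ambiguity.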

Solution for null data

The latter can be confounding because the error message is often misleading: it can show a value in the message without indicating that the real underlying cause is a NULL value. This can be repaired by using a COALESCE expression to replace the NULL with a value such as ‘Unknown’ or simply the empty string ''.

A good practice is to use views as the source of data for a cube, rather than making any modifications within the SSAS data source view. It is in the view that I add the COALESCE function around the data item.

For example, if there are nulls in the MIDDLE_NAME data item, the view

CREATE VIEW vwPeople AS
SELECT FIRST_NAME [First Name],
       COALESCE(MIDDLE_NAME, '') [Middle Name], -- replace NULL with an empty string
       LAST_NAME [Last Name]
FROM PEOPLE

will use the MIDDLE_NAME value for [Middle Name] unless it is NULL, in which case it will use the empty string instead.

Microsoft Business Intelligence Conference 2010


I’m getting excited about the Microsoft BI Conference coming up in New Orleans. I scheduled all of my sessions with a heavy emphasis on very technical deep dives into PowerPivot and PerformancePoint Services. I am also looking forward to learning more about Master Data Services.

Here’s a link: http://bi2010.eventpoint.com/default.aspx

I hope to see you there.  Tweet me.

When to use an OLAP cube?


There are three reasons to use an OLAP cube in addition to your relational database – performance, drill-down functionality and availability of software tools.

Online analytical processing (OLAP) is a technique for quickly analyzing a measure, e.g. profit margin, by multiple categories or dimensions, e.g. customer, region, fiscal period and product line.  Typically the end user software has capabilities to drag categories to rows and columns and aggregate the measure at each intersection of a row and column (often called a cross tab report).  This is similar to the familiar spreadsheet format.  This numeric format can usually also be represented in the form of a chart or graph.  The real power of OLAP is the ability to drill down on a category to see more details.  For example, you might drill down on a state to see details by city.

An OLAP cube is a technology that stores data in an optimized way to provide quick response to queries by dimension and measure.  Most cubes pre-aggregate the measures by the different levels of categories in the dimensions to enable the quick response time.  End user software will make querying a cube very easy, but developers, who may be accustomed to using SQL language, will need to learn a new language – MDX (Multi-Dimensional eXpressions).
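
To give a flavor of the language, here is a minimal MDX sketch; the [Sales] cube and its measure, dimension and member names are hypothetical:

SELECT
  { [Measures].[Profit Margin] } ON COLUMNS,
  { [Product].[Product Line].Members } ON ROWS
FROM [Sales]
WHERE ( [Region].[State].[New York] )

Each cell returned is the aggregated measure at the intersection of a product line and the slicer member, just like the crosstab described above.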

The standard design for a relational database source for this analysis is called a star schema: a fact table is related to multiple dimensions, which can be represented graphically in the form of a star.  A star schema design will support reporting and analysis of measures by dimensions in crosstab and graphical formats without using an OLAP cube.  Why, then, would you take on the time, expense, disk space, skill development and increased maintenance of also building a cube when a relational database will support this analysis?

There are three reasons for adding a cube to your solution:

  1. Performance.  A cube’s structure and pre-aggregation allows it to provide very fast responses to queries that would have required reading, grouping and summarizing millions of rows of relational star-schema data.  The drilling and slicing and dicing that an analyst would want to perform to explore the data would be immediate using a cube but could have lengthy pauses when using a relational data source.
  2. Drill-down functionality.  Many reporting software tools will automatically allow drilling up and down on dimensions when the data source is an OLAP cube.  Some tools, like IBM Cognos’ Dimensionally Modeled Relational (DMR) models, will allow you to use a relational source and drill down as if it were OLAP, but without the performance gains you would enjoy from a cube.
  3. Availability of software tools.  Some client software reporting tools will only use an OLAP data source for reporting.  These tools are designed for multi-dimensional analysis and use MDX behind the scenes to query the data.

OLAP cube technology will cost more in terms of development, learning and project time, but will return benefits in fast response time for analyzing large amounts of data.  This capability can result in insights that drive actions and decisions, enabling very large gains in organizational productivity, cost savings or revenue.

Which OLAP cube should I use, Microsoft SSAS or IBM Cognos PowerCube (PowerPlay)?


An Online Analytical Processing (OLAP) cube is a powerful tool to analyze large amounts of data quickly. There are a number of products available that can create an OLAP cube. This article will discuss some of the factors that should be considered in choosing an OLAP technology and evaluate two major vendors’ products – Microsoft SQL Server Analysis Services 2008 (SSAS) and IBM Cognos PowerCube.
Three factors that are important in considering OLAP technologies are:

  1. Your existing database and reporting technologies.
  2. The amount of data that will be brought into the cube.
  3. The skill set of the workers who will design and create the cubes.

Microsoft SQL Server

If your data warehouse platform is based on Microsoft SQL Server, especially if you are also using SQL Server Integration Services (SSIS), then SSAS is a natural extension of the architecture. SSIS can be used to load and process the SSAS cube, and SQL Server data will be optimized for loading into it. (It is important to note that SSAS can also source from Oracle, DB2, Access and Teradata data sources.) This Microsoft database environment can also be used as source data for an IBM Cognos cube, but the Cognos cube cannot be used with Microsoft reporting tools such as Reporting Services. IBM Cognos does have a software product that allows PowerCube browsing with Excel; Microsoft SSAS cubes can be browsed natively with Excel pivot tables.

SharePoint
If your portal and collaboration solution is Microsoft SharePoint, there are exciting features in SharePoint 2010 that combine PerformancePoint and the new PowerPivot software. These will leverage SSAS cubes and will not support IBM Cognos PowerCubes.

IBM Cognos environment
If your database platform is not Microsoft and your reporting environment is purely IBM Cognos Business Intelligence, then using PowerCubes is a natural extension. Analysis Services cubes may also be used in this environment if other considerations make that desirable, but all things being equal, Cognos is the better fit.

Size of data
The size of the source data and the resulting cube is of paramount importance in your choice. Cognos PowerCubes have an inherent size limit of 2 GB, although there are workaround techniques. Microsoft SSAS cubes are commonly 300-400 GB in size, with source data measured in terabytes; multi-terabyte SSAS cubes are in use today. SSAS also offers the ability to use relational tables for aggregation (known as ROLAP) or a hybrid (known as HOLAP), which allows even more scalability. For large amounts of data, Microsoft is the clear winner.

Learning Curve
Microsoft SSAS requires a more technical skill set of developers than IBM Cognos PowerCubes. Microsoft cubes require a working knowledge of the multidimensional language, MDX. For developers who know SQL, it will look similar, but it is a paradigm shift analogous to moving from procedural languages to object-oriented languages. There will be a learning curve. The return for that investment is much more flexibility, programmability and extensibility.
IBM Cognos Transformer, the software used to design a PowerCube, was designed with the developer or power user in mind. IBM envisions strong financial analysts creating their own cubes. The result is simpler and easier to use, but lacks the rich capabilities present in SSAS. Organizations with a medium amount of data and limited technical resources can build solutions more quickly using IBM Cognos cubes.

Conclusion
In conclusion, Microsoft SQL Server Analysis Services will provide more scalability, extensibility and functionality, at a cost of complexity and a learning curve. This makes it the more strategic choice, and IBM Cognos PowerCubes a tactical choice in the appropriate situations. PowerCubes are best used with smaller sets of data, in an existing Cognos environment, and when a less advanced technical skill set is required of the designers.

IBM Cognos BI 8.4.1 release supports SQL Server Analysis Services 2008


IBM has released Cognos BI 8.4.1 and claims that it finally supports SSAS 2008. I will be upgrading a client in early January 2010 and will put this to the test. Prior to 8.4.1 there was a way to access an SSAS 2008 cube, but it would not pass the user credentials with Kerberos as it does with SSAS 2005. I will be testing the new release for integrated security in particular.

Problem using an SSAS data source in IBM Cognos Framework Manager


When trying to use a Microsoft SSAS OLAP cube data source in Framework Manager, you get this error message:

MO-ERR-0028 The user starting the process doesn’t have the correct privileges

The solution is as follows:

  1. Use a network domain account for the Cognos service.
  2. On the Cognos server, in Local Security Settings under Local Policies/User Rights Assignment, add the Cognos service account to “Act as part of the operating system”.
  3. On the SQL Server, give the Cognos service account security access to the cube.
  4. For the SSAS data source in Cognos Connection, use the IBM Cognos 8 service credentials for the signon.

Enterprise data warehouse or star schema? (aka Inmon vs Kimball)


In my business intelligence consulting experience, the question often arises whether it is better to build an enterprise data warehouse or to design a star schema database with fact tables sharing conformed dimensions. In an enterprise data warehouse, or corporate information factory, the organization is modeled in a central data model, and most respected methodologists recommend a third normal form design. The star schema approach is optimized for fast queries and advocates building fact tables joined to dimensions, with dimensions shared among fact tables. The dimensions use unique integer primary keys, which the fact tables reference as foreign keys, as in the sketch below. Star schemas are often called data marts, connoting that a mart is smaller than a warehouse. The early thought leaders for these concepts are Bill Inmon for the enterprise data warehouse and corporate information factory, and Ralph Kimball for the dimensional star schema architecture.
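
A minimal sketch of that dimensional pattern, with hypothetical table and column names:

CREATE TABLE DimProduct (
    ProductKey  INT PRIMARY KEY,   -- surrogate key, a unique integer
    ProductName VARCHAR(100)
);

CREATE TABLE FactSales (
    DateKey     INT NOT NULL,      -- foreign key to a date dimension
    ProductKey  INT NOT NULL REFERENCES DimProduct (ProductKey),
    SalesAmount DECIMAL(18, 2)     -- the measure being analyzed
);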


Now, in my opinion, a methodology is not a search for Truth; it should be evaluated on its effectiveness in allowing an organization to achieve its objectives, given its constraints and its current context. (That’s a long way of saying “it depends”.) If time and money were unlimited, the hybrid approach of building an enterprise model database and projecting it to various business-process-oriented data marts and cubes would almost always be the most effective design. If time and money are very constrained, then loading data directly into a star schema with a companion cube would be an efficient design to deliver analysis capability for a specific subject area.


The enterprise data model allows flexibility in modeling an organization’s data in a way that matches the organization, rather than trying to map the organization’s processes into a fact/dimension mold. The operational systems primarily model the organization at one point in time: now. The enterprise data warehouse can be updated at different frequencies from separate operational data sources, models the organization over time, and allows retrieving information about a point in time or a range of time. From this central data store, star schema databases can be loaded at regular frequencies. These data marts are modeled with specific functional purposes in mind and optimized for query speed. Since these data marts can be created directly, without the central data store, the incremental cost of creating the enterprise data warehouse must be evaluated. There is an up-front development cost and there are ongoing operational costs. The initial cost includes additional design, ETL development and disk space. The ongoing costs include the additional processing time and storage growth that the redundant data store requires. The return on investment must be shown by faster data mart building and loading, better quality information, and the ability to deliver future solutions that would be difficult to achieve if an earlier star schema design proved inaccurate. The chief benefit is flexibility in architecture.


A good strategy for achieving a hybrid method is to build the business intelligence system one business process at a time. Design the enterprise model subset and the dimensional model together, and structure the ETL processes to pass the data through efficiently. With each subsequent project, refine the enterprise model and reuse conformed dimensions in the data mart when adding new fact and dimension tables. The enterprise data warehouse grows incrementally, with the immediate benefit of data mart reporting.


The star schema dimensional model with multidimensional cubes is the most efficient structure in current technology, both in terms of processing speed and end user report writing speed, although new in-memory column-oriented technology is beginning to look attractive. An enterprise data warehouse model can be the most accurate way to describe an organization over time. Combining the two now will allow very powerful analysis today, with the option of using a more effective delivery technology in the future. An individual organization must decide how to deploy its resources now and weigh the value of a data warehouse against a decision support analysis tool. I recommend always using star schemas and multidimensional cubes today, and combining them with an enterprise data model when the information strategy justifies more cost for a better foundation for the future.