Category: Data Integration

Merging Two Excel Files or Sheets on Multiple Columns Using Power Query

Excel Power Query is a powerful, no-code/low-code tool that allows you to combine and transform data from multiple sources in a repeatable and refreshable way. One common use case is merging two Excel files or worksheets based on multiple matching columns, similar to a SQL join. Power Query is a major part of Power BI, but it can also be used directly in Excel.

When to Use Power Query for Merging

Power Query is ideal when:

  • You receive recurring Excel files with the same structure
  • You need a reliable, refreshable merge process
  • You want to avoid complex formulas like VLOOKUP or XLOOKUP across many columns

Step-by-Step Overview

1. Load Both Data Sources into Power Query

  • Open Excel and go to Data → Get Data
  • Choose From Workbook (for separate files) or From Table/Range (for sheets in the same file)

  • Load each dataset into Power Query as a separate query

Tip: Ensure the columns you plan to merge on have the same data types (e.g., text vs. number).

2. Start the Merge Operation

  • In Power Query, select the primary table
  • Go to Home → Merge Queries
  • Choose the secondary table from the dropdown

3. Select Multiple Matching Columns

  • Click the first matching column in the primary table
  • Hold Ctrl (or Cmd on Mac) and select additional matching columns
  • Repeat the same column selections in the secondary table, in the same order

For example, to merge on CustomerID, OrderDate, and Region, you would click CustomerID, then hold the Ctrl key and click OrderDate, and then (while still holding Ctrl) click Region.

Power Query treats this as a composite key, and all selected columns must match for rows from both tables to merge.


4. Choose the Join Type

Select the appropriate join kind:

  • Left Outer – Keeps all rows from the first table (most common) and brings in the values from matching rows in the second table
  • Inner – Keeps only the rows that match in both tables
  • Full Outer – Keeps all rows from both tables, combining them where there is a match and leaving the other table’s values empty where there is not

Click OK to complete the merge.
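
For reference, completing these steps writes an M step into the formula bar along the lines of the sketch below. The query names (Orders, Shipments) are hypothetical, the key columns are the ones from the CustomerID / OrderDate / Region example above, and Left Outer is used as the chosen join kind:

= Table.NestedJoin(
    Orders,                                  // primary query (hypothetical name)
    {"CustomerID", "OrderDate", "Region"},   // composite key in the primary table
    Shipments,                               // secondary query (hypothetical name)
    {"CustomerID", "OrderDate", "Region"},   // matching columns, in the same order
    "Shipments",                             // name of the new column of nested tables
    JoinKind.LeftOuter                       // the join kind selected above
)

Seeing the two key lists side by side is also a quick way to confirm that the columns were selected in the same order in both tables.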


5. Expand the Merged Data

  • A new column appears containing nested tables
  • Click the expand icon to select which columns to bring in (see the sketch after this list)
  • Remove unnecessary columns to keep the dataset clean
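
Continuing the hypothetical example, the expand step that gets generated looks roughly like this (the columns brought in from the secondary table are illustrative):

= Table.ExpandTableColumn(
    #"Merged Queries",           // the step created by the merge
    "Shipments",                 // the nested-table column added by the merge
    {"ShipDate", "Carrier"},     // columns to bring in from the secondary table
    {"ShipDate", "Carrier"}      // names to give the expanded columns
)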

6. Load and Refresh

  • Click Close & Load
  • The merged dataset is now available in Excel
  • When source files change, simply click Refresh to update everything automatically

Key Benefits

  • Handles multi-column joins cleanly and reliably
  • Eliminates fragile lookup formulas
  • Fully refreshable and auditable
  • Scales well as data volume grows

In Summary

Using Power Query to merge Excel data on multiple columns brings database-style joins into Excel, making your workflows more robust, maintainable, and professional. Once set up, it saves time and reduces errors—especially for recurring reporting and analytics tasks.

Thanks for reading!

Power BI load error: load was cancelled by error in loading a previous table

You may run into this error when loading data in Power BI:

"load was cancelled by error in loading a previous table"

If you get this error, keep scrolling down to find the underlying (“inducing”) error. This message indicates that an error occurred while loading an earlier table in the process. The real, initial error will be more descriptive. Resolve that error (or errors) first, and this one will go away.

I hope you found this helpful.

Understanding Microsoft Fabric Shortcuts

Microsoft Fabric is a central platform for data and analytics, and one of the powerful features that supports its all-in-one design is Shortcuts. Shortcuts provide a simple way to unify data across multiple locations without duplicating or moving it. This is a big deal because it saves a LOT of the time and effort usually involved in moving data around.

What Are Shortcuts?

Shortcuts are references (or “pointers”) to data that resides in another storage location. Instead of copying the data into Fabric, a shortcut lets you access and query it as if it were stored locally.

This is especially valuable in today’s data landscape, where data often spans OneLake, Azure Data Lake Storage (ADLS), Amazon S3, or other environments.

Types of Shortcuts

There are two types of shortcuts: table shortcuts and file shortcuts.

  1. Table Shortcuts
    • Point to existing tables in other Fabric workspaces or external sources.
    • Allow you to query and analyze the table without physically moving it.
  2. File Shortcuts
    • Point to files (e.g., Parquet, CSV, Delta Lake) stored in OneLake or other supported storage systems.
    • Useful for scenarios where files are your system of record, but you want to use them in Fabric experiences like Power BI, Data Engineering, or Data Science.

Benefits of Shortcuts

Shortcuts are a really useful feature; here are some of the benefits:

  • No Data Duplication: Saves storage costs and avoids data sprawl.
  • Single Source of Truth: Data stays in its original location while being usable across Fabric.
  • Speed and Efficiency: Query and analyze external data in place, without lengthy ETL processes.
  • Flexibility: Works across different storage platforms and Fabric workspaces.

How and Where Shortcuts Can Be Created

  • In OneLake: You can create shortcuts directly in OneLake to link to data from ADLS Gen2, Amazon S3, or other OneLake workspaces.
  • In Fabric Experiences: Whether working in Data Engineering, Data Science, Real-Time Analytics, or Power BI, shortcuts can be created in lakehouses or KQL (Kusto Query Language) databases, and you can use them directly as data in OneLake. Any Fabric service will be able to use them without copying data from the data source.
  • In Workspaces: Shortcuts make it possible to connect across lakehouses stored in different workspaces, breaking down silos within an organization. The shortcuts can be generated from a lakehouse, warehouse, or KQL database.
  • Note that warehouses do not support the creation of shortcuts. However, you can query data stored within other warehouses and lakehouses.

How Shortcuts Can Be Used

  • Cross-Workspace Data Access: Analysts can query data in another team’s workspace without requesting a copy.
  • Data Virtualization: Data scientists can work with files stored in ADLS without having to move them into Fabric.
  • BI and Reporting: Power BI models can use shortcuts to reference external files or tables, enabling consistent reporting without duplication.
  • ETL Simplification: Instead of moving raw files into Fabric, engineers can create shortcuts and build transformations directly on the source.

Common Scenarios

  • A finance team wants to build Power BI reports on data stored by the operations team without moving the data.
  • A data scientist needs access to parquet files in Amazon S3 but prefers to analyze them within Fabric.
  • A company with multiple Fabric workspaces wants to centralize access to shared reference data (like customer or product master data) without replication.

In summary: Microsoft Fabric Shortcuts simplify data access across locations and workspaces. Whether table-based or file-based, they allow organizations to unify data without duplication, streamline analytics, and improve collaboration.

For more detail, see the Microsoft Learn OneLake documentation about Shortcuts, where you can explore all the related Shortcut topics.

Thanks for reading! I hope you found this information useful.

Understanding UNION, INTERSECT, and EXCEPT in Power BI DAX

When working with data in Power BI, it’s common to need to combine, compare, or filter tables based on their rows. DAX provides three powerful table / set functions for this: UNION, INTERSECT, and EXCEPT.

These functions are especially useful in advanced calculations, comparative analysis, and custom table creation in reports. If you have used these functions in SQL, the concepts here will be familiar.

Sample Dataset

We’ll use the following two tables throughout our examples:

Table: Sales_2024

The above table (Sales_2024) was created using the following DAX code utilizing the DATATABLE function (or you could enter the data directly using the Enter Data feature in Power BI):
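
If you want sample data to follow along with, here is an illustrative DATATABLE definition; the column names and values below are assumptions, chosen only so that the UNION, INTERSECT, and EXCEPT results discussed later work out as described:

Sales_2024 =
DATATABLE (
    "Product", STRING,
    "Region", STRING,
    "Amount", INTEGER,
    {
        { "A", "East", 100 },
        { "B", "North", 150 },
        { "D", "West", 180 }
    }
)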

Table: Sales_2025

The above table (Sales_2025) was created using the following DAX code utilizing the DATATABLE function (or you could enter the data directly using the Enter Data feature in Power BI):
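
Again, here is an illustrative definition with assumed values; note the deliberately duplicated “D – West – 180” row, which the INTERSECT example below relies on:

Sales_2025 =
DATATABLE (
    "Product", STRING,
    "Region", STRING,
    "Amount", INTEGER,
    {
        { "B", "North", 150 },
        { "D", "West", 180 },
        { "D", "West", 180 },
        { "E", "South", 200 }
    }
)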

Now that we have our two test tables, we can use them to explore the three table / set functions – UNION, INTERSECT, and EXCEPT.

1. UNION – Combine Rows from Multiple Tables

The UNION function returns all rows from both tables, including duplicates. It requires the same number of columns and compatible data types in the corresponding columns of the tables being UNION’ed; the column names do not have to match.

DAX Syntax:

UNION(<Table1>, <Table2>)

For our example, here is the syntax and resulting dataset:

UnionTable = UNION(Sales_2024, Sales_2025)

As you can see, the UNION returns all rows from both tables, including duplicates.

If you were to reverse the order of the tables in the function call, the result would remain the same.

To remove duplicates, you can wrap the UNION inside a DISTINCT() function call, as shown below:
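
A minimal sketch (the table name UnionDistinctTable is just illustrative):

UnionDistinctTable = DISTINCT(UNION(Sales_2024, Sales_2025))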

2. INTERSECT – Returns Rows Present in Both Tables

The INTERSECT function returns only the rows that appear in both tables (based on exact matches across all columns).

DAX Syntax:

INTERSECT(<Table1>, <Table2>)

For our example, here is the syntax and resulting dataset:

IntersectTable = INTERSECT(Sales_2024, Sales_2025)

Only the rows in Sales_2024 that are also found in Sales_2025 are returned.

If you were to reverse the order of the tables, you would get the following result:

IntersectTableReverse = INTERSECT(Sales_2025, Sales_2024)

In this case, it returns only the rows in Sales_2025 that are also found in Sales_2024. Since the record with “D – West – $180” exists twice in Sales_2025 and also exists in Sales_2024, both of those rows are returned. So, while it might not matter for every dataset, order does matter when using INTERSECT.

3. EXCEPT – Returns Rows in One Table but Not the Other

The EXCEPT function returns rows from the first table that do not exist in the second.

DAX Syntax:

EXCEPT(<Table1>, <Table2>)

For our example, here is the syntax and resulting dataset:

ExceptTable = EXCEPT(Sales_2024, Sales_2025)

Only the rows in Sales_2024 that are not in Sales_2025 are returned.

If you were to reverse the order of the tables, you would get the following result:

ExceptTableReverse = EXCEPT(Sales_2025, Sales_2024)

Only the rows in Sales_2025 that are not in Sales_2024 are returned. Since EXCEPT pulls rows from the first table that do not exist in the second, order does matter when using EXCEPT.

Comparison table summarizing the 3 functions:

| Function | UNION | INTERSECT | EXCEPT |
| Purpose & Output | Returns all rows from both tables | Returns rows that appear in both tables (i.e., rows that match across all columns) | Returns rows from the first table that do not exist in the second |
| Match Criteria | Column position (number of columns) and data types | Column position (number of columns), data types, and values | Column position (number of columns) and data types must match; rows whose values have no match in the second table are returned |
| Order Sensitivity | Order does not matter | Order matters if you want duplicates returned when they exist in the first table | Order matters |
| Duplicate Handling | Keeps duplicates; they can be removed by using DISTINCT() | Returns duplicates only if they exist in the first table | Returns duplicates only if they exist in the first table |

Additional Notes for your consideration:

  • Column Names: Only the column names from the first table are kept; the second table’s columns must match in count and data type.
  • Performance: On large datasets, these functions can be expensive, so you should consider filtering the data before using them.
  • Case Sensitivity: String comparisons are generally case-insensitive in DAX.
  • Real-World Use Cases:
    • UNION – Combining a historical dataset and a current dataset for analysis.
    • INTERSECT – Finding products sold in both years.
    • EXCEPT – Identifying products discontinued or newly introduced.

Thanks for reading!

Data Cleaning methods

Data cleaning is an essential step in the data preprocessing pipeline when preparing data for analytics or data science. It involves identifying and correcting or removing errors, inconsistencies, and inaccuracies in a dataset to improve its quality and reliability. Data should always be cleaned before being used in analyses, reporting, development, or integration. Here are some common data cleaning methods:

Handling missing values:

  • Delete rows or columns with a high percentage of missing values if they don’t contribute significantly to the analysis.
  • Impute missing values by replacing them with a statistical measure such as mean, median, mode, or using more advanced techniques like regression imputation or k-nearest neighbors imputation.

Handling categorical variables:

  • Encode categorical variables into numerical representations using techniques like one-hot encoding, label encoding, or target encoding.

Removing duplicates:

  • Identify and remove duplicate records based on one or more key variables.
  • Be cautious when removing duplicates, as sometimes duplicated entries may be valid and intentional.

Handling outliers:

  • Identify outliers using statistical methods like z-scores, box plots, or domain knowledge.
  • Decide whether to remove outliers or transform them based on the nature of the data and the analysis goals.

Correcting inconsistent data:

  • Standardize data formats: Convert data into a consistent format (e.g., converting dates to a specific format).
  • Resolve inconsistencies: Identify and correct inconsistent values (e.g., correcting misspelled words, merging similar categories).

Dealing with irrelevant or redundant features:

  • Remove irrelevant features that do not contribute to the analysis or prediction task.
  • Identify and handle redundant features that provide similar information to avoid multicollinearity issues.

Data normalization or scaling:

  • Normalize numerical features to a common scale (e.g., min-max scaling or z-score normalization) to prevent certain features from dominating the analysis due to their larger magnitudes.

Data integrity issues:

Finally, you need to address data integrity issues.

  • Check for data integrity problems such as inconsistent data types, incorrect data ranges, or violations of business rules.
  • Resolve integrity issues by correcting or removing problematic data.

It’s important to note that the specific data cleaning methods that need to be applied to a dataset will vary depending on the nature of the dataset, the analysis goals, and domain knowledge. It’s recommended to thoroughly understand the data and consult with domain experts when preparing to perform data cleaning tasks.

What is data analytics? And what are the different types of data analytics?

Data analytics is the overall process of capturing and using data to produce meaningful information, including metrics and trends, that can be used to better understand events and help make better decisions. Usually the goal is to improve the efficiency and outcomes of an operation, such as a business, a political campaign, or even an individual (such as an athlete). There are four (4) prevalent types of data analytics – descriptive, predictive, diagnostic, and prescriptive.

  1. Descriptive analytics – provides information about “what has happened”. Examples of questions answered by descriptive analytics include: How much are our sales this month and what is our year-over-year sales increase? How many website visitors did we have and how many signups?
  2. Predictive analytics – provides insight into “what may happen” in the future based on the past. Examples of questions answered by predictive analytics include: Based on previous customer service call patterns and outcomes, what is the likelihood of a customer switching to another provider? Based on a customer’s profile, how much should we charge him for insurance?
  3. Diagnostic analytics – provides information to explain “why something happened”. In addition to the direct data, this may also involve more indirect or macro data sources, such as weather data, local or national economic data, or competitor data. And it may also involve forming logical theories about the correlation of events. Examples of questions answered by diagnostic analytics include: How effective was the marketing blitz and which channel had the most impact? Did the weather affect sales or was it the price increase?
  4. Prescriptive analytics – provides insight into “what to do to make something happen”. Examples of questions answered by prescriptive analytics include: Based on the results of our test marketing blitz campaign, if we roll out the full campaign with adjustments to the channel spread, how many additional temporary customer service staff will we need to handle the increased volume without long wait times?
The four (4) types of data analytics

Descriptive analytics is the simplest and most common form of analytics used in organizations and is widely referred to as Business Intelligence (BI). There is widespread interest in predictive analytics but less than 50% of companies currently use it as it requires additional, more expensive skills. Diagnostic and prescriptive analytics have always been around because companies have always used information from descriptive analytics to hypothesize “why things happened” and make decisions on “what to do”. But it’s the automation of these types through new methods and the integration of more data inputs that is fairly new. The latter three forms are sometimes called Advanced Analytics or Data Science.

All the types of analytics require some form of data integration and use some of the same data in an environment. While descriptive analytics only needs data from the time periods being analyzed, and usually from a narrower data set, predictive, prescriptive, and diagnostic analytics produce better results when given as much data as is available, from a wider timeframe and a broader set of sources. There is overlap between the different types of analytics because the analysis of “what may happen” is driven by “what has happened” in the past and “why it happened”; and determining “what to do” is driven by “what has happened”, “why it happened”, and “what may happen”. Companies on the forefront of data analytics tend to use all four types.

BI Application getting ORA-02391 error

Last week we rolled out a new dashboard that uses a new data source.
In one of our BI environments, the application was throwing an error:
“ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit at OCI call OCISessionBegin”

This is an Oracle Database error, and not an error directly from the BI Application.

For the “ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit” error …
The Cause is:   An attempt was made to exceed the maximum number of concurrent sessions allowed by the SESSIONS_PER_USER clause of the user profile.
And the Action for resolution is:   End one or more concurrent sessions or ask the database administrator to increase the SESSIONS_PER_USER limit of the user profile.

Turns out the SESSIONS_PER_USER parameter was set too low; it was set to 3 for the user being used to access the database from the BI application. This error could have also been observed from an ETL tool accessing the database with an ID with the same parameter setting.

One of the DBAs bumped this parameter up to 30 for the user, and that resolved the issue.
We requested that this change be made to the BI application databases in all the environments – Development, Test, QA, and Production.
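
For reference, a DBA might check and raise the limit with statements like the sketch below; the user and profile names are hypothetical:

-- Find the profile assigned to the application's database user
SELECT username, profile FROM dba_users WHERE username = 'BI_APP_USER';

-- Check the current SESSIONS_PER_USER limit on that profile
SELECT * FROM dba_profiles
 WHERE profile = 'BI_APP_PROFILE'
   AND resource_name = 'SESSIONS_PER_USER';

-- Raise the limit (to 30, as described above)
ALTER PROFILE BI_APP_PROFILE LIMIT SESSIONS_PER_USER 30;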

Although all seems to be well, we will now monitor to see how many sessions the application is using and if there is any negative impact on the source application. This will allow us to determine if we need to make any other adjustments.

Thanks for reading. I hope you found this information useful.

Quality Assurance (QA) for Data Projects or Data Applications

This post discusses Quality Assurance (QA) activities for data projects.

What is Quality Assurance (QA)?  Simply put, Quality Assurance, also called QA, Testing or Validation, is about testing an application or solution to ensure that all the stated/promised/expected requirements are met. It is a critically important activity for all software application development or implementations. Data applications are no different. They need to be tested to ensure they work as intended.

QA stands between development and deployment. And QA makes the difference between a delivered product and a high quality delivered product.

There are a number of things to keep in mind when you plan your Quality Assurance activities for data solutions. I present some of them in this post as suggestions, considerations, or prompting questions. The things mentioned here will not apply to all data applications but can be used as a guide or a check.

People / Teams

The number of people and teams involved in a project will vary depending on the size, scope and complexity of the project.

The technical team building the application needs to perform an initial level of validation of the solution.

If there is a Quality Assurance team that performs the validation tasks, then that team will need to perform the “official” validation.

The business analysts and end users of the application also need to validate. Where possible, work with as many end users as you efficiently can. The more real users you have testing the application, the better the chances of finding issues early.

Where it makes sense, Test IDs that simulate various types of users or groups should be used to help test various usage and security scenarios. This is particularly useful in automated testing.

On large projects where there is a lot to be tested, it is best to break up the testing across multiple people or teams. This will help to prevent testing fatigue and sloppy testing and result in higher quality testing.

Plan ahead to ensure that access for all the relevant users is set up in the testing environments.

Communication

With all the teams and people involved, it is important to have a plan for how they will communicate. Things to consider and have a plan for include:

  • How will teams communicate internally? Email, Microsoft Teams, SharePoint, and shared files are some options.
  • How will the various teams involved communicate with each other? In other words, how will cross-team communication be handled? The same options apply.
  • How will issues and status be communicated? Weekly meetings, Status emails or documents, Shared files available on shared spaces are options.
  • How will changes and resolutions be tracked? Files, SDLC applications, Change Management applications are options.
  • How will teams and individuals be notified when they need to perform a task? Manual communication or automated notifications from tools are options.

Data

The most important thing to ensure in data projects is that the data is high quality, particularly the “base” data set. If the base data is incorrect, everything built on top of it will be bad. Of course, the correctness of intermediate and user-facing data is just as important, but validating the base data is critical to getting correct data everywhere else.

  • Ensure that table counts, field counts and row counts of key data are correct.
  • Does the data warehouse data match the source data?
  • Test detailed, low level records with small samples of data
  • Test to ensure that the data and the values conform to what is expected. For example, ensure that there is no data older than 3 years, or that there are no account values outside a certain range. The Data Governance Team may become involved in these activities across all projects.

Next in line is the “intermediate” data such as derived metrics, aggregates, specialized subsets, and more. These will also need to be verified.

  • Are the calculated values correct?
  • Are the aggregates correct? Test aggregate data with small, medium and large sets of data
  • Verify metric calculations

Then the user-facing data or data prepared for self-service usage needs to be validated.

  • Does the data on the dashboard match the data in the database?
  • Are the KPIs correctly reflecting the status?

Test the full flow of the data. The validity of the data should be verified at each stage of the data flow – from the source, to the staging, to the final tables in the data warehouse, to aggregates or subsets, to the dashboard.

Take snapshots of key datasets or reports so you can compare results post data migration.

Some additional data prep might be needed in some cases.

  • These include making sure that you have sourced adequate data for testing. For example, if you need to test an annual trend, then it might be best to have at least a year’s worth of data, preferably two.
  • You may need to scramble or redact some data for testing. Often Test data is taken from the Production environment and then scrambled and/or redacted in order to not expose sensitive information.
  • You may need to temporarily load in data for testing. For various reasons, you may need to load some Production data into the QA environment just to test the solution or a particular feature and then remove the data after the testing is complete. While this can be time consuming, sometimes it’s necessary, and it’s good to be aware of the need early and make plans accordingly.

Aesthetics & Representation of Data

Presentation matters. Although the most critical thing is data correctness, how the data is presented is also very important. Good presentation helps with understanding, usability, and adoption. A few things to consider include:

  • Does the application, such as a dashboard, look good? Does it look right?
  • Are the components laid out properly so that there is no overcrowding?
  • Are the logos, colors and fonts in line with company expectations?
  • Are proper chart options used to display the various types of data and metrics?
  • Is the information provided in a way that users can digest?

Usage

The data application or solution should be user friendly, preferably intuitive or at least have good documentation. The data must be useful to the intended audience, in that, it should help them to understand the information and make good decisions or take sensible actions based on it.

The application should present data in a manner that is effective – easy to access, and easy to understand.

The presentation should satisfy the analytic workflows of the various users. Users should be able to logically step through the application to find information at the appropriate level of detail that they need based on their role.

A few things that affect usability include:

  • Prompts – ensure that all the proper prompts or selections are available to users to slice and filter the data as necessary. And of course, verify that they work.
  • Drill downs and drill throughs – validate that users can drill-down and across data to find the information they need in a simple, logical manner.
  • Easy interrogation of the data – if the application is ad-hoc in nature, validate that users can navigate it or at least verify that the documentation is comprehensive enough for users to follow.

Security

Securing the application and its data so that only authorized users have access to it is critical.

Application security comprises “authentication” – access to the application – and “authorization” – what a user is authorized to do when he or she accesses the application.

Authorization (what a user is authorized to do within the application) can be broken into “object security” – what objects or features a user has access to, and “data security” – what data elements a user has access to within the various objects or features.

For example, a user has access to an application (authenticated / can log in), and within the application the user has access to (authorized to see and use) 3 of 10 reports (object-level security). The user is not authorized to see the other 7 reports (object-level security) and, therefore, will not have access to them. Now, within the 3 reports that the user has access to, he or she can only see data related to 1 of 5 departments (data-level security).

All object-level and data-level security needs to be validated. This includes negative testing. Not only test to make sure that users have the access they need, but testing should also ensure that users do not have access that they should not have.

  • Data for testing should be scrambled or redacted as appropriate to protect it.
  • Some extremely sensitive data may need to be filtered out entirely.
  • Can all the appropriate users access the application?
  • Are non-authorized users blocked from accessing the application?
  • Can users see the data they should be able to see to perform their jobs?

Performance

Performance of the data solution is important to user efficiency and user adoption. If users cannot get the results they need in a timely manner, they will look elsewhere to get what they need. Even if they have no choice, a poorly performing application will result in wasted time and dollars.

A few things to consider for ensuring quality around performance:

  • Application usage – is the performance acceptable? Do the results get returned in an acceptable time?
  • Data Integration – is the load performance acceptable?
  • Data processing – can the application perform all the processing it needs to do in a reasonable amount of time?
  • Stress Testing – how is performance with many users? How is it with a lot of data?
  • How is performance with various selections or with no selections at all?
  • Is ad-hoc usage set up to be flexible while avoiding rogue analyses that may cripple the system?
  • Is real-time analysis needed and is the application quick enough?

These items need to be validated and any issues need to be reported to the appropriate teams for performance tuning before the application is released for general usage.

Methodology

Each organization, and even each team within an organization, will have a preferred methodology for application development and change management, including how they perform QA activities.

Some things to consider include:

  • Get QA resources involved in projects early so that they gain an early understanding of the requirements and the solutions to assess and plan how best to test.
  • When appropriate, do not wait until all testing is complete before notifying development teams of issues discovered. Notifying them early could make the difference between your project being on time or late.
  • Create a test plan and test scripts – even if they are high-level.
  • Where possible, execute tasks in an agile, iterative manner.
  • Each environment will have unique rules and guidelines that need to be validated. For example, your application may have a special naming convention, color & font guidelines, special metadata items, and more. You need to validate that these rules and guidelines are followed.
  • Use a checklist to ensure that you validate with consistency from deliverable to deliverable
  • When the solution being developed is replacing an existing system or dataset, use the new and old solutions in parallel to validate the new against the old.
  • Document test results. All testing participants should document what has been tested and the results. This may be as simple as a checkmark or a “Done” status, but may also include things like data entered, screenshots, results, errors, and more.
  • Update the appropriate tracking tools (such as your SDLC or Change Management tools) to document changes and validation. These tools will vary from company to company, but it is best to have a trail of the development, testing, and release to production.
  • For each company and application, there will be a specific, unique set of things that need to be done. It is best if you have a standard test plan or test checklist to help you confirm that you have tested all important aspects and scenarios of the application.

This is not an all-encompassing coverage of Quality Assurance for data solutions, but I hope the article gives you enough information to get started or tips for improving what you currently have in place. You can share your questions, thoughts and input via comments to this post. Thanks for reading!

Learning Hadoop: The key features and benefits of Hadoop

What are the key features and benefits of Hadoop? Why is Hadoop such a successful platform?

Apache Hadoop, mostly called just Hadoop, is a software framework and platform for reading, processing, storing and analyzing very large amounts of data. There are several features of Hadoop that make it a very powerful solution for data analytics.

Hadoop is Distributed

With Hadoop, from a few to hundreds or thousands of commodity servers (called nodes) can be connected (forming a cluster) to work together to achieve whatever processing power and storage capability is needed. The software platform enables the nodes to work together, passing work and data between them. Data and processing are distributed across the nodes, which spreads the load and significantly reduces the impact of failure.

Hadoop is Scalable

In the past, to achieve extremely powerful computing, a company would have to buy very expensive, large, monolithic computers. As data growth exploded, eventually even those super computers would become insufficient. With Hadoop, from a few to hundreds or thousands or even millions of commodity servers can be relatively easily connected to work together to achieve whatever processing power and storage capability is needed. This allows a company or project to start out small and then grow as needed inexpensively, without any concern about hitting a limitation.

Hadoop is Fault Tolerant

Hadoop was designed and built around the fact that there will be frequent failures on the commodity hardware servers that make up the Hadoop cluster. When a failure occurs, the software handles the automatic reassignment of work and replication of data to other nodes in the cluster, and the system continues to function properly without manual intervention. When a node recovers, from a reboot for example, it will rejoin the cluster automatically and become available for work.

Hadoop is backed by the power of Open Source

Hadoop is open source software, which means that it can be downloaded, installed, used and even modified for free. It is managed by the renowned non-profit group, the Apache Software Foundation (ASF), hence the name Apache Hadoop. The group is made up of many brilliant people from all over the world, many of whom work at some of the top technology companies, who commit their time to managing the software. In addition, there are also many developers who contribute code to enhance or add new features and functionality to Hadoop or to add new tools that work with Hadoop. The various tools that have been built over the years to complement core Hadoop make up what is called the Hadoop ecosystem. With a large community of people from all over the world continuously adding to the growth of the Hadoop ecosystem in a well-managed way, it will only get better and become more useful to many more use cases.

These are the reasons Hadoop has become such a force within the data world. Although there is some hype around the big data phenomenon, the benefits and solutions based on the Hadoop ecosystem are real.

You can learn more at https://hadoop.apache.org

InfatoODI – Informatica to ODI conversion tool

We are currently in the process of upgrading Oracle Business Intelligence Applications (OBIA) from version 7.9.6 to OBIA 11g. Oracle has replaced Informatica as the data integration tool in the platform with its own tool, Oracle Data Integrator (ODI). This was a selfish, profit-driven move on Oracle’s part with no consideration for the impact on customers, but it is what it is.

Because of this, as a part of the upgrade to the new OBIA release, we need to convert all our hundreds of Informatica mappings to ODI.  As you can imagine, this is a lot of work.  We are getting help from a company that has developed a specialized conversion tool called InfatoODI, which converts Informatica mappings to ODI interfaces.

We are performing the conversions specifically for an OBIA application, but the tool can be used as a straight conversion tool for Informatica-to-ODI for any type of application.

We are in the beginning stages of the project, but early indications are that the tool will save us time, but I am not sure how significant as yet. I will post updates as we progress through the conversions with my experience and opinion of the tool.