
Extract Transform Load (ETL)

Published February 27, 2017
Doug Rose
Author | Agility | Artificial Intelligence | Data Ethics

To analyze a body of data, that data must first be loaded into a data warehouse; that is, it must be copied from one or more systems, converted into a uniform format, and written to the new destination. This process is commonly referred to as extract, transform, load (ETL). ETL provides the means to combine disparate data from multiple sources into a homogeneous data set that can be analyzed to extract business intelligence.
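
As a rough illustration, the three stages can be sketched as plain functions. This is a minimal Python sketch, not a production pipeline; the source data and the in-memory warehouse are hypothetical stand-ins:

    # Minimal ETL skeleton (illustrative only).
    def extract(source):
        """Read raw records from a source system into temporary storage."""
        return list(source)  # e.g., rows from a database cursor or an API

    def transform(records):
        """Convert records to the uniform schema of the target warehouse."""
        return [{key.lower(): value for key, value in r.items()} for r in records]

    def load(records, target):
        """Write the transformed records to the target data warehouse."""
        target.extend(records)

    warehouse = []  # stands in for the target data warehouse
    load(transform(extract([{"ID": 1, "Name": "Ada"}])), warehouse)
    print(warehouse)  # [{'id': 1, 'name': 'Ada'}]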

Extract

During extraction, data is read from one or more sources and held in temporary storage for transformation and loading. An organization may extract data from its own internal systems, such as a transaction processing system that records all order activity, or from external sources, such as data it purchases or obtains for free from other organizations.

Extraction is commonly broken down into two logical extraction methods:

  • Full: All data currently available in the source system is extracted.
  • Incremental: Only data that has been added to the source system since the most recent extraction is extracted. The purpose of incremental extractions is to keep the data in the data warehouse up to date (see the sketch after this list).
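
To make the distinction concrete, here is a hedged sketch using Python's built-in sqlite3 module; the orders table and the last_run watermark are hypothetical:

    import sqlite3

    conn = sqlite3.connect(":memory:")  # stands in for the source system
    conn.execute("CREATE TABLE orders (id INTEGER, created_at TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, '2017-01-01'), (2, '2017-02-01')")

    # Full extraction: pull every row currently in the source system.
    full = conn.execute("SELECT * FROM orders").fetchall()

    # Incremental extraction: pull only rows added since the last run,
    # tracked by a high-water-mark timestamp (ISO dates compare as strings).
    last_run = "2017-01-15"
    incremental = conn.execute(
        "SELECT * FROM orders WHERE created_at > ?", (last_run,)
    ).fetchall()

    print(full)         # [(1, '2017-01-01'), (2, '2017-02-01')]
    print(incremental)  # [(2, '2017-02-01')]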

Extraction is also broken down into two physical extraction methods:

  • Online: Data is extracted directly from the source system; for example, streaming data from websites or online games.
  • Offline: Data is extracted from files created by the source system; for example, via a save or export operation. The sketch below contrasts the two.
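
In this hedged sketch, the offline path reads a file the source system has already exported, while the online path reads directly from a (simulated) live source; the data is hypothetical:

    import csv, io

    # Offline: the source system exported its data to a file first;
    # io.StringIO stands in for that export file.
    export = io.StringIO("id,name\n1,Ada\n2,Grace\n")
    offline_rows = list(csv.DictReader(export))

    # Online: records are read directly from the running source system,
    # modeled here as a live stream (in practice a database connection,
    # message queue, or HTTP API).
    def live_stream():
        yield {"id": 3, "name": "Edsger"}

    online_rows = list(live_stream())
    print(offline_rows, online_rows)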

Transform

During the transform stage, data is processed to make it consistent in structure and format so that it conforms to a uniform schema. A schema provides the structure and rules for organizing data in a relational database. The source and target database systems may use different schemas; for example, the source database may store shipping information in a Customer table, whereas the target database stores shipping information in a separate Shipping table. Or, the source table may have dates in the MM/DD/YYYY format, whereas the target uses the DD/MM/YYYY format. To successfully copy data from the source to the target, certain transformations must be made to ensure that the source data arrives in an acceptable format.
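
Both of the transformations just described fit in a few lines of Python; the field names below are hypothetical stand-ins for the Customer and Shipping tables:

    from datetime import datetime

    source_row = {"customer": "Acme Corp", "ship_to": "10 Main St",
                  "order_date": "02/27/2017"}

    # Schema mapping: shipping details move out of the Customer record
    # into a separate Shipping record, as in the example above.
    shipping_row = {"customer": source_row["customer"],
                    "address": source_row["ship_to"]}

    # Format conversion: MM/DD/YYYY in the source becomes DD/MM/YYYY
    # in the target.
    parsed = datetime.strptime(source_row["order_date"], "%m/%d/%Y")
    shipping_row["order_date"] = parsed.strftime("%d/%m/%Y")

    print(shipping_row)
    # {'customer': 'Acme Corp', 'address': '10 Main St', 'order_date': '27/02/2017'}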

Transformations can be handled in two ways:

  • Multistage: The transformation process is broken down into distinct steps, each of which results in a separate staging table prior to data being inserted into the target table.
  • Pipelined: Staging tables are eliminated to streamline the process when the target database is a more integral part of the ETL operation. Both approaches are sketched below.
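
The difference is easiest to see in code. In this hedged sketch, the multistage version materializes each intermediate result (an in-memory analogue of a staging table), while the pipelined version streams rows straight through with generators:

    rows = [{"name": " Ada ", "amount": "100"}]

    def clean(r):
        return {**r, "name": r["name"].strip()}

    def price(r):
        return {**r, "amount": int(r["amount"])}

    # Multistage: each step materializes a complete intermediate result
    # (standing in for a staging table) before the next step runs.
    stage1 = [clean(r) for r in rows]
    stage2 = [price(r) for r in stage1]

    # Pipelined: no staging tables; each row flows through every
    # transformation on its way to the target.
    pipelined = (price(clean(r)) for r in rows)

    print(stage2, list(pipelined))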

Load

During the load operation, all newly transformed data is written to the target data warehouse for storage. Various mechanisms can be used to load data into the target warehouse, including the following:

  • Structured query language (SQL), sketched after this list
  • External tables
  • Oracle Call Interface (OCI) and direct-path application programming interfaces (APIs)
  • Export/import features of the source and target databases
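
Of these mechanisms, plain SQL is the simplest to illustrate. Here is a minimal sketch with Python's sqlite3 module; the facts table and its columns are hypothetical:

    import sqlite3

    transformed = [(1, "Ada", "27/02/2017"), (2, "Grace", "01/03/2017")]

    warehouse = sqlite3.connect(":memory:")  # stands in for the target warehouse
    warehouse.execute("CREATE TABLE facts (id INTEGER, name TEXT, order_date TEXT)")

    # Load: write all newly transformed rows to the target in one batch.
    warehouse.executemany("INSERT INTO facts VALUES (?, ?, ?)", transformed)
    warehouse.commit()

    print(warehouse.execute("SELECT COUNT(*) FROM facts").fetchone())  # (2,)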

Variations on the Theme

ETL is commonly described as a three-step process primarily to make it easier to understand. In practice, ETL is not a series of clearly defined steps but more of a single process. As such, the sequence of events may vary. Depending on the approach, ETL may be more like one of the following:

  • Extract, load, and transform (ELT): Data is loaded into the target warehouse and then transformed (see the sketch after this list). ELT tends to improve performance when the target data warehouse has more compute capacity than the separate transformation engine an ETL operation would otherwise require.
  • Extract and then transform while loading: With a pipelined approach to transforming data, the transformation and loading steps are often combined.
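
An ELT flow can be sketched the same way: the raw data is loaded first, and the transformation runs as SQL inside the warehouse itself, where the compute lives. The table names here are hypothetical:

    import sqlite3

    warehouse = sqlite3.connect(":memory:")

    # Load first: raw rows land in a staging table untransformed.
    warehouse.execute("CREATE TABLE raw_orders (id INTEGER, order_date TEXT)")
    warehouse.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                          [(1, "02/27/2017"), (2, "03/01/2017")])

    # Transform afterward, using the warehouse's own SQL engine:
    # MM/DD/YYYY is rearranged into DD/MM/YYYY with string functions.
    warehouse.execute("""
        CREATE TABLE orders AS
        SELECT id,
               substr(order_date, 4, 2) || '/' ||
               substr(order_date, 1, 2) || '/' ||
               substr(order_date, 7, 4) AS order_date
        FROM raw_orders
    """)
    print(warehouse.execute("SELECT * FROM orders").fetchall())
    # [(1, '27/02/2017'), (2, '01/03/2017')]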

The ETL Bottleneck

Given the increasing volumes of data that organizations must capture and integrate into their data warehouses, ETL often becomes a major bottleneck. Database administrators must constantly revise their ETL procedures to accommodate variations in the data arriving from different sources. In addition, the volume and velocity of data can overwhelm an organization’s existing data warehouse storage and compute capabilities, leading to delays in producing time-sensitive reports and business intelligence. ETL operations often compete for the same storage and compute resources needed to handle data queries and analytics.

Fortunately, data warehousing technology has evolved to help reduce or eliminate the impact of the ETL bottleneck. For example, cloud data warehousing provides virtually unlimited storage and compute resources, so that ETL does not need to compete with queries and analytics for limited resources. In addition, data warehouse frameworks such as Hadoop take advantage of distributed, parallel processing to distribute work-intensive tasks such as ETL over multiple servers to complete jobs faster.

With the right tools and technologies in place, organizations can now stream diverse data from multiple sources into their data warehouses and query and analyze that data in near real time. If you or your team is in charge of procuring a new data warehouse solution for your organization, look for a solution that provides unlimited concurrency, storage, and compute, to avoid contention between ETL processes and the people in the organization who need the same system to run queries and conduct analysis. Also look for a system that can live-stream data feeds and process structured, semi-structured, and unstructured data quickly and easily without complicated and costly ETL or ELT processes. In most cases, the ideal solution will be a data warehouse built for the cloud.
