For business leaders who are new to data engineering or related fields like data science and business intelligence, it can be helpful to familiarize yourself with commonly used terms. This glossary provides foundational context and information to help you understand key concepts in data engineering.
Advanced Analytics: The process of discovering deeper insights in data using sophisticated tools and techniques like machine learning, artificial intelligence, data mining, sentiment analysis, and more, which go beyond traditional business intelligence tools.
Apache Airflow: An open-source platform for programmatically authoring, scheduling, and monitoring workflows.
Artificial Intelligence (AI): A broad term used to describe engineered systems that are trained to perform tasks that typically require human intelligence.
Business Intelligence (BI): Strategies and systems used by enterprises to analyze data and make informed business decisions.
Big Data: Refers to volumes of structured or unstructured data too large or fast-moving for traditional data-processing tools to handle effectively.
Big Data Processing: The process of extracting value or insights from big data using specialized software or frameworks such as Hadoop.
BigQuery: Google’s cloud data warehouse.
Cassandra: Apache Cassandra, an open-source, distributed NoSQL database designed to handle large amounts of data across many servers.
Data Architecture: A composition of models, rules, and standards that define the structure and interactions of data systems.
Data Catalog: An organized inventory of data assets that relies on metadata to facilitate data management.
Data Engineering: The process of making data useful through designing, building, and maintaining data pipelines that transform raw data into a usable format for analysis or data science modeling.
Data Ingestion: The process of moving data from one or multiple sources to a storage destination, where it can be processed and transformed for analysis or modeling.
Data Integration: Combining data from different sources into a unified view.
Data Lake: A storage repository where data is stored in its raw format, providing flexibility compared to more structured data warehouses.
Data Lineage: Describes the origin and changes to data over time.
Data Management: The practice of securely and effectively collecting, maintaining, and utilizing data.
Data Migration: The process of permanently moving data from one storage system to another, which may involve data transformation.
Data Mining: The process of discovering patterns, correlations, or anomalies in datasets to predict outcomes.
Data Pipeline: A set of steps that ingest and integrate data from raw sources, transforming it and moving it to a destination for analysis or data science. Data pipelines can be automated and maintained to ensure reliable data availability.
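Conceptually, a pipeline is a chain of steps that data flows through. Here is a toy sketch in plain Python; the step names and sample records are hypothetical, purely for illustration:

```python
def ingest():
    # Step 1: pull raw records from a source (hardcoded here for illustration).
    return ["  Widget,3 ", "Gadget,5", "  Widget,2"]

def transform(raw_rows):
    # Step 2: parse and clean each record into a (product, quantity) pair.
    rows = []
    for line in raw_rows:
        product, qty = line.strip().split(",")
        rows.append((product.strip(), int(qty)))
    return rows

def load(rows):
    # Step 3: aggregate into a destination structure ready for analysis.
    totals = {}
    for product, qty in rows:
        totals[product] = totals.get(product, 0) + qty
    return totals

result = load(transform(ingest()))
print(result)  # {'Widget': 5, 'Gadget': 5}
```

In a production setting, each of these steps would typically be a separate, monitored job orchestrated by a tool such as Apache Airflow.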
Data Science: The practice of using scientific methods, algorithms, and systems to derive insights from structured and unstructured data.
Data Visualization: The graphical representation of one or more datasets.
Data Warehouse: A storage system used for data analysis and reporting.
Database: A collection of structured data.
ETL (Extract, Transform, Load): The three-step data integration process used to blend data from different sources.
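As an illustration of the three steps, here is a minimal ETL sketch using Python's built-in sqlite3 module; the sample records and table layout are hypothetical:

```python
import sqlite3

# Extract: raw order records, as they might arrive from a source system
# (hypothetical sample data for illustration).
raw_orders = [
    {"id": 1, "amount_usd": "19.99", "region": " east "},
    {"id": 2, "amount_usd": "5.00", "region": "WEST"},
]

# Transform: normalize types and clean up inconsistent values.
clean_orders = [
    (o["id"], float(o["amount_usd"]), o["region"].strip().lower())
    for o in raw_orders
]

# Load: write the cleaned records into a destination database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean_orders)

total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(round(total, 2))  # 24.99
```

A variant of this pattern, ELT, loads the raw data first and transforms it inside the destination warehouse.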
Flat File: A file that stores data as plain text, typically one record per line, such as a CSV file; often used as a simple, portable data format.
Flink: Apache Flink, an open-source big data processing framework capable of handling streaming data in real time.
Hadoop / HDFS: Apache’s open-source software framework for processing big data, with HDFS referring to Hadoop Distributed File System.
Kafka: Apache Kafka, an open-source distributed platform for streaming data.
Kubernetes / k8s: An open-source system for automating the deployment, scaling, and management of containerized applications.
Machine Learning (ML): Algorithms that learn patterns from data and improve their performance on a task without being explicitly programmed for it.
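As a minimal illustration of the idea, the following sketch "learns" a trend line from past data points and uses it to predict a future value; the numbers are hypothetical:

```python
# Historical data: monthly sales over four months (illustrative numbers).
months = [1, 2, 3, 4]
sales = [10, 20, 30, 40]

# Learn a least-squares trend line y = slope * x + intercept from the data.
n = len(months)
mean_x = sum(months) / n
mean_y = sum(sales) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales))
    / sum((x - mean_x) ** 2 for x in months)
)
intercept = mean_y - slope * mean_x

# Predict month 5 from the learned pattern.
prediction = slope * 5 + intercept
print(prediction)  # 50.0
```

Real machine learning systems fit far more complex models to far more data, but the principle is the same: learn from examples, then generalize.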
MapReduce: A programming model within the Hadoop framework used to access and process big data stored in the Hadoop Distributed File System (HDFS).
Metadata: Data that describes and provides information about other data.
MySQL: An open-source relational database management system with a client-server model.
NoSQL: A class of non-relational databases that offer flexibility and scalability for handling large volumes of data, often unstructured.
Open Source: Software that is freely available for use and modification by the community.
Parquet: A column-oriented data storage format within the Hadoop ecosystem.
PostgreSQL: A free, open-source relational database management system, commonly referred to as Postgres.
PySpark: The Python API for Apache Spark, providing an interface for large-scale data processing and analysis from Python.
Redshift: Amazon’s cloud data warehouse solution.
S3: Amazon’s object storage service (Simple Storage Service), providing scalable and durable storage for many types of data.
SQL: Structured Query Language, a domain-specific language used to interact with databases and manipulate data.
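For example, a typical SQL query counts or aggregates records. The snippet below runs one against an in-memory SQLite database via Python's built-in sqlite3 module; the table and values are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Ada", "east"), ("Grace", "west"), ("Alan", "east")],
)

# A typical SQL query: count customers per region.
rows = conn.execute(
    "SELECT region, COUNT(*) FROM customers GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 2), ('west', 1)]
```

The same SQL statements work, with minor dialect differences, across systems such as MySQL, PostgreSQL, BigQuery, and Redshift.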
This glossary provides a brief overview of key terms in data engineering. It aims to equip business leaders with a foundational understanding of concepts relevant to data-driven decision-making and the management of large volumes of data.