Job Description
As the Data Ingestion & ETL Engineer, you will play a pivotal role in transforming raw client data into a standardized schema, ensuring seamless integration with internal systems. This position requires a deep understanding of data ingestion, transformation, and ETL processes to facilitate efficient and accurate data flow across platforms. The ideal candidate will have hands-on experience with flat file formats (e.g., CSV), Snowflake architecture, SQL Server, and transformation tools such as dbt, SSIS, or Azure Data Factory. Experience with medallion architecture for managing ETL pipelines is highly desirable.
Moore is a data-driven constituent experience management (CXM) company achieving accelerated growth for clients through integrated supporter experiences across all platforms, channels and devices. An innovation-led organization, Moore is the largest marketing, data and fundraising company in North America serving the purpose-driven industry, with clients across the education, association, political and commercial sectors.
Check out www.WeAreMoore.com for more information.
Your Impact:
• Design and implement processes to ingest raw client data from flat files (e.g., CSV) into Snowflake data schemas (a minimal example appears after this list).
• Develop automated workflows to standardize raw data into a pre-defined schema for consumption by internal systems.
• Build, maintain, and optimize ETL pipelines using medallion architecture to ensure scalability and reliability.
• Perform data profiling and quality checks to validate the accuracy and consistency of ingested data (also sketched below).
• Leverage transformation tools such as dbt, SSIS, or Azure Data Factory to design and execute data transformations effectively.
• Collaborate with internal mapping specialists to understand data requirements and ensure seamless integration of standardized data with downstream systems.
• Orchestrate data movement between Snowflake, SQL Server, and other internal systems using appropriate tools and frameworks (also sketched below).
• Monitor and fine-tune data ingestion and transformation processes for performance, ensuring minimal latency and optimal resource utilization.
• Identify and resolve bottlenecks or issues in data workflows proactively.
• Maintain comprehensive documentation of data ingestion processes, transformation rules, and ETL workflows.
• Provide regular status updates and reports on data conversion progress and potential risks.
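To make the flat-file ingestion bullet concrete, here is a minimal Python sketch using the snowflake-connector-python package to stage a CSV and load it into a bronze-layer table. The connection parameters, stage name (RAW_STAGE), and table name (CLIENT_DATA.BRONZE.CLIENT_EXPORT) are illustrative assumptions, not details from this role.

```python
# Minimal CSV-to-Snowflake ingestion sketch. All names and credentials below
# are hypothetical placeholders for illustration.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="INGEST_WH",
    database="CLIENT_DATA",
    schema="BRONZE",
)

try:
    cur = conn.cursor()
    # Upload the local flat file to a named internal stage.
    cur.execute("PUT file:///data/client_export.csv @RAW_STAGE AUTO_COMPRESS=TRUE")
    # Load the staged file into the bronze (raw) layer of the medallion architecture.
    cur.execute("""
        COPY INTO CLIENT_DATA.BRONZE.CLIENT_EXPORT
        FROM @RAW_STAGE/client_export.csv.gz
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"')
        ON_ERROR = 'ABORT_STATEMENT'
    """)
finally:
    conn.close()
```

Failing the whole load on the first bad record (ON_ERROR = 'ABORT_STATEMENT') keeps the bronze layer clean; a continue-on-error policy with review of rejected rows is an equally common choice.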
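Similarly, a first-pass data-profiling step might look like the pandas sketch below; the column names (donor_id, gift_date, gift_amount) are invented for illustration and would come from the client's actual schema.

```python
# Basic profiling and quality checks on an ingested CSV; column names are
# hypothetical examples, not a real client schema.
import pandas as pd

df = pd.read_csv("client_export.csv")

# Row count and per-column null rates: a quick consistency baseline.
print(f"rows: {len(df)}")
print(df.isna().mean().sort_values(ascending=False))

# Flag duplicates on a hypothetical natural key.
dupes = df[df.duplicated(subset=["donor_id", "gift_date"], keep=False)]
print(f"duplicate key rows: {len(dupes)}")

# Simple domain check: gift amounts should be non-negative.
print(f"negative amounts: {(df['gift_amount'] < 0).sum()}")
```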
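And for the Snowflake-to-SQL Server movement bullet, one lightweight option is pandas plus SQLAlchemy, as sketched below. Both connection URLs and table names are placeholder assumptions (and a dedicated tool such as an Azure Data Factory pipeline would often be preferred at volume); the snowflake-sqlalchemy and pyodbc dialects are assumed to be installed.

```python
# Sketch of copying a standardized (gold-layer) table from Snowflake into
# SQL Server. Connection URLs and table names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

snowflake_engine = create_engine(
    "snowflake://user:password@account/CLIENT_DATA/GOLD?warehouse=INGEST_WH"
)
sqlserver_engine = create_engine("mssql+pyodbc://user:password@internal_dsn")

# Read the standardized table and land it in the internal SQL Server schema.
df = pd.read_sql("SELECT * FROM STANDARDIZED_CLIENTS", snowflake_engine)
df.to_sql("standardized_clients", sqlserver_engine,
          schema="dbo", if_exists="replace", index=False, chunksize=10_000)
```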
Your Profile:
• Bachelor’s degree in computer science, data engineering, or a related field (or equivalent experience).
• 3-5 years of experience in data engineering, with a focus on data ingestion, ETL, and schema design.
• Proficiency in Snowflake, including schema design, data ingestion, and transformation processes.
• Strong experience with SQL Server and its integration with Snowflake.
• Expertise in medallion architecture for ETL pipeline development.
• Experience with transformation tools such as dbt, SSIS, or Azure Data Factory.
• Knowledge of file formats such as CSV, JSON, and Parquet.
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration abilities, with a focus on cross-team coordination.
• Detail-oriented with a commitment to producing high-quality, reliable data outputs.
• Preferred: experience with cloud platforms (AWS, Azure, or GCP), familiarity with version control systems such as Git, and knowledge of data governance and security best practices.
How We’ll Support You:
• Join the largest marketing and fundraising company in North America serving the nonprofit industry, where we prioritize innovation and professional growth.
• Collaborate with industry subject matter experts across an enterprise of more than 5,000 employees.
• To help you stay energized, engaged and inspired, we offer a wide range of benefits, including comprehensive healthcare, paid holidays and generous paid time off, so you have the time and space to recharge, pursue your other passions and be with the people you care about.
• Moore is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.