Calling All Senior Data Engineering Innovators!

Published at: 7/12/2024
Categories: dataengineering, datascience, pyspark, aws
Author: the_host_4404

Are you a data wizard who thrives on solving complex challenges and crafting elegant solutions? Mactores is seeking a Senior AWS Data Engineer to join our team of passionate problem-solvers and help us deliver the best data platform solutions for our clients.

Who We Are

Mactores is a trusted leader in providing modern data platform solutions. Since 2008, we have empowered businesses to accelerate their value through automation. We collaborate closely with our clients to strategize, navigate, and accelerate their digital transformation journeys.

What You'll Do

Engineer Robust Data Pipelines: Design, develop, and maintain scalable data pipelines using Amazon EMR or Glue, ensuring seamless data flow (see the short sketch after this list).
Craft Efficient Data Models: Create intuitive data models and end-user querying solutions using Amazon Redshift, Snowflake, Athena, and Presto.
Orchestrate Data Magic: Build and maintain Airflow orchestration for efficient data pipeline management.
Collaborate and Innovate: Partner with cross-functional teams to understand data needs, design solutions, and troubleshoot issues.
Stay Ahead of the Curve: Continuously learn and evaluate new AWS data technologies to ensure our solutions are always cutting-edge.
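
To give a concrete sense of this work, here is a minimal, illustrative PySpark batch job of the kind such a pipeline might run on Amazon EMR or Glue. The bucket names, paths, and column names are hypothetical placeholders, not actual Mactores or client systems:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Start (or reuse) a Spark session on the cluster
    spark = SparkSession.builder.appName("daily-orders-pipeline").getOrCreate()

    # Read raw JSON events landed in S3 by an upstream ingestion job (hypothetical path)
    raw = spark.read.json("s3://example-raw-bucket/orders/2024/07/12/")

    # Keep completed orders and roll them up into a daily aggregate per region
    daily_totals = (
        raw.filter(F.col("status") == "COMPLETED")
           .withColumn("order_date", F.to_date("created_at"))
           .groupBy("order_date", "region")
           .agg(
               F.sum("amount").alias("total_amount"),
               F.count("*").alias("order_count"),
           )
    )

    # Write partitioned Parquet that Athena, Presto, or Redshift Spectrum can query directly
    (
        daily_totals.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders_daily/")
    )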

What We're Looking For

Technical Expertise: 3+ years of experience with PySpark, SQL, Amazon EMR or Glue, Redshift or Snowflake, Athena, Presto, and Airflow.
Problem-Solving Prowess: A knack for identifying and resolving complex data challenges with innovative solutions.
Collaborative Spirit: A strong communicator and team player who enjoys working with diverse stakeholders.
Curiosity and Drive: A passion for learning new technologies and a desire to make a real impact.

Bonus Points
AWS Data Analytics Specialty Certification
Experience with Agile development methodologies

Life at Mactores

Be Bold: We embrace out-of-the-box thinking and encourage you to do the same.
Enjoy the Challenge: We tackle intricate data problems for our clients and celebrate learning and growth.
Own It: We believe in taking ownership of our work and contributing to the long-term success of our clients and our company.

Perks You'll Love

Permanent Remote Work: Enjoy the flexibility and freedom of working from home.
Comprehensive Health Insurance: We offer 7,00,000 INR health insurance coverage through ICICI Lombard.
Flexible Hours: Work when you're most productive.
Challenging Projects: Gain valuable experience working on complex projects for US clients.
Cutting-Edge Tech: Explore and utilize the latest AWS data technologies.
Professional Development: We cover the cost of training and exams for professional certifications.
Annual Meet-Up: Connect with your team in person and have fun!

Are You Ready to Unleash Your Data Superpowers?

If you're a Senior Data Engineer who's curious, bold, and eager to take action, we invite you to join our team. Let's transform data into actionable insights and drive real business value together!

How To Apply

Please e-mail your resume/CV, code repo (GitHub, etc.), and LinkedIn/website URL to [email protected]

Please feel free to leave questions in the comments section.
