
We are looking for a Senior Data Engineer specializing in Python, PySpark, and AWS to join our client's dynamic team. You will play a key role in designing, optimizing, and improving data pipelines for the ingestion, enrichment, and exposure of classified and transactional data on AWS. You will work directly under the supervision of a Data Engineering Team Lead, who will organize tasks and ensure the smooth delivery of the project.
Responsibilities:
Analyze and improve existing data pipelines to optimize performance, cost efficiency, and scalability.
Transition batch/snapshot pipelines to incremental, delta-based data processing pipelines.
Develop and maintain Terraform modules for efficient infrastructure management on AWS.
Migrate data pipelines from Google Cloud Platform to AWS while minimizing downtime and ensuring high reliability.
Design and implement DataDog dashboards and alerting systems to enable proactive monitoring of data pipeline performance.
Stay up to date with emerging technologies and actively promote relevant innovations and improvements within the team.
Support and guide junior team members, share expertise and best practices, and contribute to the overall growth of the team.
Required Technical Skills:
Desired Qualities:
Join a fast-scaling tech platform revolutionizing marketplaces. Work with elite engineering teams on mission-critical AI, data, and optimization challenges.
Extremely competitive compensation plus equity. Offices in Bucharest or Belgrade.
Apply to build the future of intelligent platforms.