Greetings of the day!!
I'm Arumugam Veera, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn: Arumugam Veera
About the Company:
At Techmango, we believe that true transformation happens at the intersection of deep engineering and business intelligence. For over 15 years, we have operated as a Gold Standard Service Provider, delivering 350+ engineering solutions that help global organizations scale with resilience.
We are a team of 260+ technical experts who do more than just write code; we build with purpose. Our philosophy is simple: impact grows when strategy aligns with execution.
If you are looking for a career where your technical depth is matched by meaningful business context, Techmango is where you belong.
We're Hiring: Senior AWS Redshift Data Engineer
Primary Skills Required: AWS, PySpark, AWS Glue, Redshift, Data Build Tool, S3-based Medallion architecture, SQL Server query optimization, complex queries, Lambda, Lakehouse, ETL orchestration patterns, API, GitHub, CI/CD integration, modernization
Job Location: Madurai
Working Mode: Remote, with travel to Madurai required for onboarding and once a month thereafter.
Job Type: Full-Time
Experience: 7+ Years
Shift Timing: 2 PM to 11 PM IST
Notice Period: Immediate joiners only
Tasks
We are looking for a seasoned Data Engineer to join our Engineering organization and take ownership of the systems that power our data-driven decisions. You'll architect and maintain high-performance data pipelines and databases across platforms like MSSQL, Redshift, and Snowflake, ensuring data is reliable, accessible, and optimized for scale. If you thrive on solving complex problems, love optimizing systems for performance, and want to be the go-to expert in a collaborative, high-impact environment, we'd love to meet you.
What You Will Do:
- Design and maintain scalable, efficient, and well-partitioned schemas in MSSQL, Redshift, and Snowflake
- Architect and optimize complex queries, stored procedures, indexing strategies, and partitioning for large datasets
- Build, monitor, and maintain data pipelines that ensure timely and accurate delivery of data to internal and external consumers
- Own and enforce data refresh SLAs, ensuring availability, consistency, and reliability across production and reporting environments
- Collaborate with software engineers, analysts, and DevOps teams to ensure data models and queries align with product and reporting requirements
- Proactively identify and remediate performance bottlenecks, slow queries, and data inconsistencies
- Implement and manage database change workflows using schema migration/versioning tools
- Define and promote best practices for data access, security, compliance, and observability
Requirements
What You Will Bring:
- 7+ years of experience in database engineering or backend systems development
- Deep expertise with MSSQL, Amazon Redshift, and Snowflake, including schema design and performance optimization
- Design, build, and maintain scalable data pipelines and ETL processes tailored to client needs.
- Proven track record maintaining data freshness SLAs and data quality across production pipelines
- Hands-on experience with T-SQL, LinkSQL, query optimization, and indexing strategies
- Experience with query optimization, mapping tables to blocks and partitions, sub-table structure and keying/indexing for efficiency
- Experience with relational data modeling and schema versioning in support of software development.
- Experience as the sole or lead database expert on a development team.
- Familiarity with source control systems (e.g., Git/Bitbucket) and CI/CD integration.
- Strong problem-solving skills are desirable.
- Transforming product requirements into workable solutions, in collaboration with several development and testing teams.
- Writing robust functions, procedures and scripts using SQL.
- Being heavily involved in the day-to-day running of the business, dealing with support and performance issues.
- Diagnose and troubleshoot data-related issues, providing quick and effective resolutions.
- Analyze data workflows and identify areas for improvement, recommending and implementing solutions to optimize performance.
- Utilize a proactive approach to foresee potential problems and address them before they impact operations.
Benefits
Why Join TechMango
- Work with a fast-growing global data engineering team.
- Opportunity to learn and grow in advanced cloud and analytics technologies.
- Competitive salary and performance-based incentives.
- A collaborative and innovation-driven work environment.
Interview Process:
- L1 Technical Discussion: Virtual discussion via Google Meet or Teams
- L2 Technical Discussion: In-Person (Face-to-Face) at the Chennai/Madurai location
- Client Interview: Virtual
- HR Discussion: Virtual / In-Person
- Onboarding: Mandatory visit to the Chennai/Madurai branch