Data Engineer | Goa, India | Full-Time
This position is primarily responsible for designing, developing, and maintaining robust ETL/ELT pipelines for data ingestion, transformation, and storage, along with broader scalable data solutions.
The person will work with a team responsible for ensuring the availability, reliability, and performance of data systems, and will design, develop, and maintain scalable data pipelines and infrastructure on cloud platforms (Azure, AWS, or GCP).
The person will also work with a collaborative team responsible for driving client performance by combining data-driven insights with strategic thinking to solve business challenges. The candidate should have strong organizational, critical thinking, and communication skills to interact effectively with stakeholders.
Responsibilities:
- Design, build, and optimize data ingestion pipelines using Azure Data Factory, AWS Glue, or Google Cloud Data Fusion
- Ensure reliable extraction, transformation, and loading (ETL/ELT) of large datasets from various sources
- Implement and manage data storage and processing solutions such as Databricks, ensuring high performance and scalability
- Develop and orchestrate data pipelines using Apache Airflow for workflow automation and job scheduling
- Collaborate with database administrators and data architects to optimize database schemas
- Develop and maintain complex BI reporting systems to provide actionable insights
- Work closely with business stakeholders to understand requirements and translate them into technical specifications
- Monitor data quality and integrity across pipelines
- Maintain comprehensive documentation for data processes, pipelines, and reporting systems
- Collaborate with cross-functional teams to streamline data operations
- Stay updated on emerging data engineering technologies and best practices to enhance system performance
Technical Qualifications:
- Experience in data ETL tools and cloud platforms like Azure Data Factory, Azure Databricks, AWS Glue, Amazon EMR, Databricks on AWS, Google Cloud Data Fusion, or Databricks on Google Cloud
- Experience in Apache Airflow for workflow orchestration and automation
- Experience in SQL for data manipulation and querying
- Experience with at least one BI tool like Power BI, Tableau, or Google Looker
- Experience with relational databases (e.g., SQL Server, MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB)
- Proficiency in Python programming
- Proficiency in developing and maintaining data visualization and reporting dashboards
- Familiarity with data warehousing solutions and big data frameworks (e.g., Apache Spark)
Personal Skills:
- Strong analytical skills: ability to read business requirements, analyze problems, and propose solutions
- Ability to identify alternatives and find optimal solutions
- Ability to follow through and ensure logical implementation
- Quick learner with the ability to adapt to new concepts and software
- Ability to work effectively in a team environment
- Strong time management skills, capable of handling multiple tasks and competing deadlines
- Effective written and verbal communication skills
Education and Work Experience:
- Background in Computer Science, Information Technology, Data Science, or a related field preferred
- Minimum 3 years of total experience, with at least 2 years of relevant experience in data engineering and data pipeline development
- Certification in Databricks, Azure Data Engineering, or any related data technology is an added advantage
Apply online