Overview:
We are seeking a talented Python Developer who is passionate about data to join our Data Engineering team. In this role, you will play a crucial part in developing and maintaining data pipelines, ensuring the reliability, efficiency, and scalability of our data systems. You will collaborate closely with cross-functional teams, including Data Scientists, Analysts, and Software Engineers, to drive data initiatives and support our business objectives.

Responsibilities:
- Design, develop, and maintain robust data pipelines and ETL processes using Python and related technologies, prioritising native code over low-code platforms.
- Collaborate with stakeholders to understand data requirements and translate them into technical solutions.
- Optimize and tune data workflows for performance, scalability, and reliability.
- Implement data quality checks and ensure data integrity throughout the data lifecycle.
- Work with cloud-based technologies such as AWS, GCP, or Azure to deploy and manage data pipelines.
- Develop and maintain documentation for data processes, systems, and best practices.
- Stay up to date with emerging technologies and industry trends in data engineering.
- Process large data volumes (30-40M+ records/day) using PySpark.

Requirements:
- Proven experience as an expert data engineer or Python Developer, with a strong understanding of software development principles and a minimum of 4 years of experience.
- Hands-on experience building and optimizing data pipelines using Python and related libraries/frameworks (e.g., Pandas, NumPy, Airflow).
- Experience with Python dependency management (e.g., libraries, packages).
- Familiarity with SQL and database technologies (e.g., PostgreSQL, MySQL, or similar).
- Experience working with cloud platforms such as AWS, GCP, or Azure.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Experience with Big Data technologies (e.g., Hadoop, Spark) is a plus.