Looking for a Data Engineer

POS-342
Location: Remote
Type: Full-time
Seniority: Senior

About Us:

As a Data Engineer at Kenility, you'll join a tight-knit family of creative developers, engineers, and designers who strive to develop and deliver the highest-quality products to market.


Technical Requirements:

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field.
  • 3+ years of hands-on experience in Data Engineering or in a similar position, with a solid background in ETL development and data pipeline implementation.
  • Strong command of SQL and Python for building and maintaining data processing solutions.
  • Practical experience using PySpark and/or Pandas to manage and process large volumes of data.
  • Good understanding of data warehousing principles and exposure to platforms such as AWS Redshift, Google BigQuery, or Snowflake, including performance tuning for high-volume environments.
  • Proven expertise in database development, data modeling, schema design, and optimization strategies that support scalability.
  • Experience creating and maintaining automated tests for data pipelines and related processes.
  • Familiarity with Unix/Linux environments and shell scripting for development and operational tasks.
  • Working knowledge of CI/CD practices applied to deploying and maintaining data processing jobs.
  • Solid understanding of the Software Development Life Cycle and experience collaborating within cross-functional development teams.
  • Strong knowledge of Credit and Fintech domains, with a clear understanding of how data supports products, workflows, and business operations in these areas.
  • English proficiency at Upper Intermediate (B2) level or higher (C1 preferred).


Tasks and Responsibilities:

  • Build, enhance, and support reliable data pipelines and ETL workflows to move and transform data into the data warehouse.
  • Develop and improve SQL queries and Python-based processing jobs to handle large-scale data operations efficiently.
  • Establish automated validation and quality control mechanisms to preserve data consistency and accuracy.
  • Track pipeline behavior, identify technical issues, and implement improvements to increase performance, stability, and scalability.
  • Work closely with product managers, analysts, and other stakeholders to understand business needs and deliver effective data-driven solutions.
  • Produce and maintain technical and design documentation related to pipelines, architectures, and supporting systems.
  • Take part in design discussions and code reviews to help uphold engineering best practices and code quality.
  • Apply data modeling principles and schema design approaches that enable scalable storage and efficient querying.
  • Support the adoption and integration of CI/CD workflows for the deployment and management of data jobs.
  • Follow SDLC standards and contribute actively as part of the development team, including updates and improvements to ETL tools and processes.


Soft Skills:

  • Responsibility
  • Proactivity
  • Flexibility
  • Great communication skills

Join Us

Ready to be part of our team?

Tell us what you're working on, and we'll help you design, scale, and deliver AI-powered software that drives real business outcomes.