Quantiful is building QU: the world's first consumer-behaviour-led planning tool. Using the latest AI technology, QU senses consumer demand to forecast your customers' purchasing behaviours and market trends with unprecedented accuracy. Revolutionise the way you understand your consumers, put power back into your supply chain planning, and amplify your financial results.
Role Overview:
We’re looking for an Intermediate Data Engineer to join the team and help build and improve our next-generation AI/ML pipeline. You will work alongside the Data Engineering and Data Science teams to deliver consumer-led forecasting.
You need to be a self-motivated, curious learner who is always trying to understand more about the tools they use, and someone who is more excited about building tools for internal data engineers and scientists than about writing code for a customer-facing product. Our analytics stack consists of a number of state-of-the-art technologies, and you’ll need to learn all of them to become effective in this role.
Driven by your passion for data, and drawing on a range of programming methods and cloud-based architectures, you will work with internal teams to define and craft innovative solutions in support of our leading global product.
What is in it for you?
- Joining a team that puts learning and development first, working with world-leading technology on an AI/ML product.
- Working in a company that hires for attitude and a core set of skills rather than a laundry list of technologies, in an environment that promotes ideas and experimentation.
- The opportunity to work on some of the most cutting-edge AI/ML projects in the world with a truly purpose-led company.
- A flexible work culture focused on customer success.
Summary of your Role
- Co-own the design and management of data and AI/ML model pipelines, in a role heavily focused on automation and on enabling structured and unstructured data science techniques.
- Work within a cloud infrastructure environment (largely AWS).
- Help ensure all systems adhere to industry best practices for securely handling sensitive client data.
- Aid in automating and accelerating software delivery by collaborating with team members and applying data engineering best practices, including infrastructure-as-code approaches.
- Help develop and implement data collection systems and other strategies that optimise statistical efficiency and data quality.
- Support the further development and improvement of Quantiful’s ETL processes.
- Co-own DevOps processes (CI/CD).
Experience Required
- Some commercial experience is needed; the position is open to anyone with 2+ years’ experience.
- Strong data pipeline, data integration, and data manipulation experience; AWS experience is important.
- Data engineering and data modelling experience, with a proven track record of supporting data science and complex machine learning services such as SageMaker.
- A computer science/I.T. background.
- Strong knowledge of the Python programming language.
- Exposure to cloud infrastructure/databases.
- Experience using technologies such as: Docker, Amazon Web Services (S3, Athena, ECS, ECR, EC2, VPC), Git, and GitHub. Exposure to SageMaker, Step Functions, and Terraform/CDK would be a plus.
Education / Knowledge and Skills Required
- Curious, always wanting to learn more about the tools and software you use.
- Degree in Software Engineering, or other relevant disciplines like Computer Science, Computer Systems, etc.
- Excellent written and verbal communicator.
- A collaborative team player.
- Creative problem solver