Our client is an established and growing company in Risk Analytics. They develop modelling tools that companies use to measure their financial risk exposure, and provide bespoke credit rating tools that give clients specific insights when making decisions. They are looking for Data Engineers to join their expanding team.
Your responsibilities will include:
- Build, track and maintain the flow of data within ETL (extract, transform, load) and analysis pipelines, ensuring successful processing and data validity.
- Work with data scientists to identify optimal ways to prepare, store and navigate their datasets.
- Work with the software and information technology teams to specify, design and implement the infrastructure for storing, searching and integrating new datasets.
- Manage the data ecosystem, ensuring it meets required business service levels, is well maintained and is cost-effective.
You will need:
- Proficient programming skills in a scripting language, preferably Python.
- Experience of supporting and implementing data pipelines using Apache Airflow.
- Experience of data engineering best practices, e.g. CI/CD and version control.
- Experience of data quality assessment and validation.
It would be preferred if you have:
- Cloud experience with AWS or GCP, preferably GCP (e.g. object storage, serverless functions).
- Experience writing SQL, e.g. with dbt.
- Experience working in an Agile development environment.
It would be nice to have:
- Deployment or use of GCP components, including Dataflow, Pub/Sub, BigQuery, Cloud Composer, Stackdriver and Data Catalog.
- Experience with graph databases, such as Neo4j.
- Familiarity with BigQuery, MS SQL Server or Postgres.