Data Content Services Lab
We are the owners of the Enterprise Data Hub: a Hadoop implementation based on Hive and Spark, currently favouring Scala. XXX has also signed up to a strategic partnership with Google to utilise its Cloud Platform (GCP) services, creating a new strategic platform for the Group and preparing our Bank of the Future service for customers. We anticipate very significant opportunities to review our systems, data-processing methods and approaches, and you would be a key part of that work.
Software Engineer (Big Data)
Role Responsibilities:
Build solutions that ingest data from source systems into our big data platform, where the data is transformed, intelligently curated and made available for consumption by downstream operational and analytical processes
Create high-quality code that processes large volumes of data effectively and at scale
Put efficiency and innovation at the heart of the design process to create design blueprints (patterns) that can be reused by other teams delivering similar types of work
Use modern engineering techniques such as DevOps, automation and Agile to deliver big data applications efficiently
Produce code that is in line with team, industry and group best practice, using a wide array of engineering tools such as GitHub Enterprise (GHE), Jenkins, UrbanCode, Cucumber, Xray, etc.
Work as part of an Agile team, taking part in relevant ceremonies and always helping to drive a culture of continuous improvement
Work across the full software delivery lifecycle, from requirements gathering and definition through design, estimation, development, testing and deployment, ensuring solutions are of high quality and non-functional requirements are fully considered
Consider platform resource requirements throughout the development lifecycle with a view to minimising resource consumption
Once cloud is proven within the bank, help to successfully transition on-premises applications and working practices to GCP