Consultant
Hyderabad, AP, IN
Consultant - (CREQ139905)
Description
Create Scala/Spark jobs for data transformation and aggregation
Produce unit tests for Spark transformations and helper methods
Write Scaladoc-style documentation with all code
Design data processing pipelines
Deep understanding of distributed systems (e.g., partitioning, replication, consistency, and consensus)
Good experience in writing Spark applications using Java and Scala.
Use Spark and Spark SQL to read Parquet data and create Hive tables using the Scala API
Work closely with the Business Analysts team to review test results and obtain sign-off
Prepare necessary design/operations documentation for future usage
Perform peer code quality reviews and act as gatekeeper for quality checks
Hands-on coding, usually in a pair programming environment
Working in highly collaborative teams and building quality code
The candidate must exhibit a good understanding of data structures, data manipulation, distributed processing, application development, and automation.
The candidate must have a good understanding of consumer financial products, data systems and environments, and the processes necessary for implementing Risk and Finance models
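As a hypothetical illustration of the kind of work described above (a transformation-plus-aggregation helper with a unit-testable core), here is a minimal Scala sketch. It uses plain collections rather than a Spark Dataset so the logic can be tested in isolation; all names (Txn, TxnAggregator, the account/amount fields) are invented for this example, not taken from the posting.

```scala
// Hypothetical domain record; in a real Spark job this would back a Dataset[Txn].
final case class Txn(account: String, amount: BigDecimal)

object TxnAggregator {
  /** Filter out non-positive amounts, then total amounts per account.
    * Mirrors the Spark equivalent:
    *   ds.filter(col("amount") > 0).groupBy("account").agg(sum("amount"))
    */
  def totalsByAccount(txns: Seq[Txn]): Map[String, BigDecimal] =
    txns
      .filter(_.amount > 0)           // drop invalid rows before aggregating
      .groupBy(_.account)             // one group per account
      .map { case (acct, ts) => acct -> ts.map(_.amount).sum }
}
```

Keeping the aggregation as a pure function over collections is one way to satisfy the unit-testing requirement: the same rule can be asserted on directly in a test, then applied inside a Spark pipeline.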
Primary Location: IN-AP-Hyderabad
Schedule: Full Time
Job Type: Experienced
Travel: No
Job Posting: 22/07/2022, 10:54:49 AM
Apply Now
Are you the right fit for this role? Please submit your application and resume/CV and we will be in touch.