Big Data Java Spark Developer

Tampa, Florida, United States
Big Data Java Spark Developer - CREQ131582
Primary Location
Tampa, Florida, United States
Job Type
Experienced
Skill
AID-Big Data
Qualification

Job Responsibilities
Design and implement scalable, fault-tolerant ETL pipelines on a big data platform to store and process terabytes of contract information from upstream sources with high availability
Apply performance tuning techniques by analyzing Spark DAGs and data structures on both relational and big data platforms, delivering high performance for ETL and reporting components
Create mock-ups and proofs of concept from business requirements when necessary
Perform ad-hoc data research and analysis, and provide written summaries of results for non-technical business users
Work with the onsite development team, providing the overlap coverage needed to ensure smooth handoffs and communication between offshore and onsite teams
Work closely with multiple teams (Business Analysis, ETL, Database, Infrastructure, Support, etc.)
Work collaboratively in a small, cross-functional global team

Job Qualifications:
6+ years of application/software development experience
3+ years of hands-on development and performance tuning experience with big data technologies such as Apache Spark, Hive, and Hadoop is a must
Must have worked with ETL technologies such as Ab Initio or Talend
Must have SQL and Linux shell scripting experience.
Ability to work independently, multi-task, and take ownership of various analyses or reviews. Should be able to lead offshore resources.
Experience with Java (Core Java, J2EE, Spring Boot RESTful services), Python, web services (REST, SOAP), XML, JavaScript, microservices, SOA, etc.
Experience with vendor products such as Tableau, Arcadia, Paxata, and KNIME is a plus
Experience developing frameworks and utility services, including logging and monitoring
Experience delivering high-quality software following continuous delivery practices and using code quality tools (JIRA, GitHub, Jenkins, SonarQube, etc.)
Working experience with financial applications and finance processes
Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark
Experience with API development and the use of data formats is a plus
Knowledge of NoSQL databases such as MongoDB, HBase, and Cassandra is a plus
Must be results-oriented, willing and able to take ownership of engagements
Strong analytical skills

Travel
No
Job Posting
31/08/2022