Cloud Data Solution Architect

Pune, Maharashtra, India
Cloud Data Solution Architect - 78296
Description

Skill: Cloud Data Solution Architect

Role: T0

Our Big Data Competency Center offers consulting services for big data development, tool evaluations, solution architecture and design, and end-to-end implementation using big data ecosystems such as Hadoop, NoSQL, graph databases, and appliances. Our state-of-the-art Advanced Technology Centers in the United States and India have the infrastructure to conduct R&D as well as PoCs across multiple technologies. We help measure enterprise analytics maturity, deliver an analytics roadmap, and recommend embedding analytics into existing operations to measure business outcomes. Our Advanced Analytics Competency Center offers comprehensive business analytics services that provide decision support, trending, and forecasting.

The service offerings span traditional BI use cases as well as more complex and advanced analytical solutions such as Social Analytics, Web Data Analytics, Text Analytics, Real-time BI, and Predictive Analytics (including models such as Targeting Customers by Direct Marketing, Market Basket Analysis, and Case Management), Model Implementation, Advanced Data Visualization, End-User Applications, etc.

Key responsibilities:

  • Set up, administer, monitor, tune, optimize, and govern large-scale Hadoop clusters and Hadoop components (on-premise/cloud) to meet high-availability/uptime requirements.
  • Design and implement new components and various emerging technologies in the Hadoop ecosystem, and successfully execute various Proof-of-Technology (PoT) / Proof-of-Concept (PoC) exercises.
  • Collaborate with various cross-functional teams (infrastructure, network, database, application) on activities such as deployment of new hardware/software, environment provisioning, capacity uplifts, etc.
  • Apply cloud-native stack knowledge and experience (Snowflake, Redshift).
  • Apply data migration experience (AWS, Snowflake).
  • Work with various teams to set up new Hadoop users, security, and platform governance.
  • Create and execute a capacity planning strategy and process for the Hadoop platform.
  • Work on cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Ambari, etc.
  • Performance-tune Hadoop clusters and various Hadoop components and routines.
  • Monitor job performance, file system/disk-space management, cluster database connectivity, and log files; manage backup/security; and troubleshoot various user issues.
  • Harden the cluster to support use cases and self-service in a 24x7 model, and apply advanced troubleshooting techniques to critical, highly complex customer problems.
  • Contribute to the evolving Hadoop architecture of our services to meet changing requirements for scaling, reliability, performance, manageability, and price.
  • Set up monitoring and alerting for the Hadoop cluster, including creation of dashboards, alerts, and weekly status reports for uptime, usage, issues, etc.
  • Design, implement, test, and document a performance benchmarking strategy for the platform as well as for each use case.
  • Act as a liaison between the Hadoop cluster administrators and the Hadoop application development team to identify and resolve issues impacting application availability, scalability, performance, and data throughput.
  • Research Hadoop user issues in a timely manner and follow up directly with the customer with recommendations and action plans.
  • Work with project team members to propagate knowledge and efficient use of the Hadoop tool suite, and participate in technical communications within the team to share best practices and learn about new technologies and other ecosystem applications.
  • Automate deployment and management of Hadoop services, including implementing monitoring.

Drive customer communication during critical events, and participate in or lead various operational improvement initiatives.
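To illustrate the capacity-planning responsibility above, here is a minimal Python sketch of the kind of projection such a process might produce. All figures (capacity, usage, growth rate) are hypothetical placeholders, and real planning would also account for compaction, small-file overhead, and reserved headroom:

```python
# Hypothetical HDFS capacity-planning sketch. Figures are illustrative
# only, not actual cluster numbers.

def months_until_full(capacity_tb, used_tb, monthly_growth_tb, replication=3):
    """Project months until raw cluster capacity is exhausted, given
    logical (pre-replication) data usage and growth and the HDFS
    replication factor."""
    raw_used = used_tb * replication          # replicas consume raw disk
    raw_growth = monthly_growth_tb * replication
    if raw_growth <= 0:
        raise ValueError("monthly growth must be positive")
    remaining = capacity_tb - raw_used
    return max(0.0, remaining / raw_growth)

if __name__ == "__main__":
    # 1 PB raw capacity, 200 TB of logical data, growing 10 TB/month.
    print(round(months_until_full(1000, 200, 10), 1))  # → 13.3
```

A real capacity process would feed such projections from cluster metrics (e.g., scraped via Ambari or Ganglia) rather than hard-coded numbers.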

Primary Locations
Pune, Maharashtra, India
Job Type
Experienced
Skill
CTE-AWS Data
Qualifications

  • 10+ years of overall industry experience, including 6+ years of strong Hadoop/Big Data development experience at various levels, with a good understanding of Java programming.
  • Strong experience with Hadoop ETL/data ingestion: Sqoop, Flume, Hive, Spark, HBase.
  • Experience in real-time data ingestion using Kafka, Storm, Spark, or Complex Event Processing (CEP).
  • Experience with Hadoop data consumption and other components: Hive, Hue, HBase, Phoenix, Spark, Mahout, Pig, Impala, Presto.
  • Experience with open-source configuration management and deployment tools such as Puppet or Chef, and scripting in Python/Shell/Perl/Ruby/Bash/PowerShell.
  • Bachelor's degree in a computer-related field or equivalent professional experience is required.
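The real-time ingestion/CEP skills listed above can be illustrated with a small stand-alone Python sketch of a sliding-window alert rule, the kind of pattern a CEP engine (or Kafka Streams / Storm / Spark Structured Streaming job) evaluates continuously. Event names, the window, and the threshold here are invented for illustration, and no broker is involved:

```python
from collections import deque

# Toy CEP rule: fire an alert when more than `threshold` "error"
# events arrive within a sliding `window` of seconds. A production
# deployment would run equivalent logic inside a streaming engine
# against a Kafka topic; this is an in-memory illustration only.

def detect_bursts(events, window=60, threshold=3):
    """events: time-ordered iterable of (timestamp_seconds, kind) pairs.
    Returns the timestamps at which the burst rule fired."""
    recent = deque()   # error timestamps still inside the window
    alerts = []
    for ts, kind in events:
        if kind != "error":
            continue
        recent.append(ts)
        # Evict events that have slid out of the window.
        while recent and recent[0] <= ts - window:
            recent.popleft()
        if len(recent) > threshold:
            alerts.append(ts)
    return alerts

if __name__ == "__main__":
    stream = [(0, "ok"), (5, "error"), (10, "error"),
              (20, "error"), (30, "error"), (200, "error")]
    print(detect_bursts(stream))  # → [30]
```

The deque keeps the check O(1) amortized per event, which is the same windowed-state idea streaming engines implement at scale.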

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 30,000+ people that cares about your growth — one that seeks to provide you with exciting projects and opportunities, and work with state-of-the-art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Travel
No
Job Posting
19/10/2022