- Perform Big Data administration and engineering activities on multiple Hadoop, Kafka, HBase, and Spark clusters
- Work on performance tuning and continuously improve operational efficiency
- Monitor platform health, generate performance reports, and drive continuous improvements
- Work closely with development, engineering, and operations teams on key deliverables, ensuring production scalability and stability
- Develop and enhance platform best practices
- Ensure the Hadoop platform meets performance SLAs
- Own the Big Data production environment, which includes Hadoop (HDFS and YARN), Hive, Spark, Livy, Solr, Kafka, Airflow, NiFi, HBase, etc.
- Perform optimization, debugging, and capacity planning of Big Data clusters
- Perform security remediation, automation, and self-healing as required
- 3+ years of work experience with a Bachelor's degree or an advanced degree
- Hands-on experience with Big Data production clusters (Hadoop HDFS and YARN, Hive, Spark, Kafka) is a must
- Minimum 3 years of experience maintaining, optimizing, and resolving issues in large-scale Big Data clusters, supporting business users and batch processes
- Hands-on experience with NoSQL databases such as HBase is a plus
- Prior experience with Linux/Unix OS services and administration, and with shell/awk scripting, is a plus
- Excellent oral and written communication, presentation, analytical, and problem-solving skills
- Self-driven, with a proven ability to work independently and as part of a team
- Experience with the Hortonworks distribution or open-source Hadoop preferred
- Must be available during core business hours
- Occasional weekend and evening hours needed on a rotational basis for operational support
- This position requires travel approximately 5% of the time