Expected Start Date Sep 3rd, 2018
Skills / JD:
Open Positions - 4
Work Location - Infopark, onsite
- Very strong computer science fundamentals; a quick learner, ready to pick up and adopt new technologies
- Good understanding of HDFS, Spark, Scala, Hive, Kafka, NoSQL (any one of HBase, Cassandra, or MongoDB), Sqoop, and Zookeeper
- Good understanding of underlying Hadoop Architectural concepts and distributed computing paradigms.
- Minimum of 6 years of hands-on experience with Hadoop and other Big Data technologies
- Should be familiar with all Hadoop Ecosystem components and Hadoop Administration Fundamentals
- Hands-on programming experience in Scala-Spark or Java-Spark
- Hands-on experience with major components such as Hive, HBase, Spark, Pig, Sqoop, Flume, Kafka, and MapReduce
- Experience with NoSQL databases (key-value, document, graph, column-family) and knowledge of when to use each kind of database
- Proven ability to contribute to the open-source community and stay up to date with the latest trends in the Big Data space
Experience: 5 – 8 Years
Duration of engagement: 4 to 6 months