-5+ years of experience with Core Java and the full SDLC is a MUST
-1+ year of hands-on experience with Spark Core and Spark SQL is a MUST
-Being a quick learner with a proactive personality is a MUST
-Willingness to be flexible and do whatever you can to bring value to the team is a MUST
-HBase, Hive, Impala, Spark - MUST KNOW. We need someone who knows how and why we use these technologies, as well as the best practices for them
-This individual will be an SME for Big Data and will be responsible for hand-holding
-Experience working with multi-terabyte data sets (preferably 100s of terabytes) is a MUST
-Familiarity with data modeling is a MUST
-Experience in the RISK space within the Financial sector is a HUGE PLUS
-Experience with the Spring framework & REST APIs
-Experience with Git distributed version control, or with Maven repositories, is a MUST
-Experience working in a Linux environment
-Experience with Hazelcast or Gemfire/Gridgain in-memory data fabric is a plus but NOT REQUIRED.
-Contribute to application design
-Be one of the developers for the distributed compute and reporting platform
-Work with peers in the team, and in the bigger group, as part of code reviews and design sessions
-Work with technical and business partners to understand needs and provide innovative solutions
Note: This position is a referral posting from the Zhimaqiao (芝麻桥) platform. You may apply directly through this posting, or send your resume to zhimaqiao@ameson.org