The ideal candidate will use their experience with Hadoop, MapReduce, Accumulo, and Spark to develop multiple cloud environments. Ideally, the candidate has worked in an environment where they have stood a new system up to operational status. Significant experience is needed in software development with HDFS and MapReduce, along with development against the Accumulo ingest and search APIs.
Looking for someone with experience developing custom filters and iterators to control the data returned by scan operations. System administration experience with these cloud systems is a big plus.
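The custom-filter pattern mentioned above can be sketched in plain Java. This is a minimal stand-in, not the real Accumulo API: an actual implementation would extend `org.apache.accumulo.core.iterators.Filter` and override `accept(Key, Value)` so that filtering happens server-side during the scan. The `ScanFilter` class, its method names, and the sample table below are all hypothetical, chosen only to illustrate the idea.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for an Accumulo server-side filter. In real Accumulo,
// a custom filter extends org.apache.accumulo.core.iterators.Filter and
// overrides accept(Key, Value); entries it rejects never reach the client.
abstract class ScanFilter {
    abstract boolean accept(String key, String value);

    // Apply the filter during a scan, mimicking how Accumulo evaluates
    // iterators before returning entries from a scan operation.
    List<String> scan(Map<String, String> table) {
        List<String> results = new ArrayList<>();
        for (Map.Entry<String, String> e : table.entrySet()) {
            if (accept(e.getKey(), e.getValue())) {
                results.add(e.getKey() + "=" + e.getValue());
            }
        }
        return results;
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        Map<String, String> table = new LinkedHashMap<>();
        table.put("row1:public", "alpha");
        table.put("row2:secret", "bravo");
        table.put("row3:public", "charlie");

        // Only return entries whose key is tagged "public".
        ScanFilter publicOnly = new ScanFilter() {
            boolean accept(String key, String value) {
                return key.endsWith(":public");
            }
        };
        System.out.println(publicOnly.scan(table));
        // Prints: [row1:public=alpha, row3:public=charlie]
    }
}
```

The design point the posting is getting at: because Accumulo iterators run on the tablet servers, pushing filtering logic into a custom iterator avoids shipping unwanted entries across the network at all.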
• Java coding using the Eclipse Integrated Development Environment (IDE)
• Software compilation using Maven
• Java development using the Spring Framework API
• Java service integration, e.g., Spring Integration, Mule, or Apache Camel
• Source code control systems
• Unit testing using JUnit testing framework
• Development using XML
• Unix shell scripting
• Object-oriented Java programming
A TS/SCI security clearance with polygraph is required.