Hadoop System Administrator
ESO is a rapidly growing technology company passionate about improving community health and safety through the power of data. We provide software applications, interoperability and data management solutions to emergency medical services, fire departments and hospitals.
We’re small enough to be nimble and fun, but big enough to be a great, stable place to work. We serve more than 14,000 customers out of our offices in Austin, Texas and Des Moines, Iowa.
About the role
The Hadoop System Administrator is responsible for administering the enterprise Big Data clusters for our data platform product line. You will deploy and manage our Hadoop and Spark cluster environments on bare-metal and cloud hosting, including service allocation and configuration, capacity planning, performance tuning, and ongoing monitoring.
More about you
You will be responsible for the care and feeding of our big data installation as well as the short- and long-term capacity planning based on our future growth plans.
Here’s just a snapshot of what your day-in-the-life will look like with the team:
- Collaborate with all other team members to ensure our big data platform is prepared to host new services and workloads as they are developed.
- Work with the development team to understand upcoming architectural work and assist with evaluating, selecting, and sequencing the key infrastructure technologies that support the product.
- Provide consistent performance evaluation and monitoring for the production Hadoop instance.
- Participate in continuous integration, production release management, and solution validation.
Some of the things required to be successful in the role:
- Experience with installation and support of enterprise Hadoop clusters (cloud and on-premises datacenter).
- Experience with designing, capacity planning, cluster setup, security, performance tuning, resource management, monitoring, troubleshooting, scaling and administration of Hadoop/Spark clusters.
- Experience with deploying and administering various cluster services like ZooKeeper, YARN, HDFS, HBase, Hive, Oozie, MapReduce, Kafka, Spark, Storm, etc.
- Good networking knowledge (you will be supporting many applications and services running on Linux across the network).
- Solid Linux administration experience.
- Solid working knowledge of Linux security concerns in both cloud and bare-metal hosting environments.
- Familiarity with the Cloudera and Hortonworks distributions.
- Experience with major cloud vendors like AWS and Azure, including administering clusters in the cloud.
- Physical and virtual networking experience.
Nice to haves for this role:
- Hadoop Administration certification.
- Experience with administering physical clusters.
- Knowledge of server hardware: storage, compute and networking components.
- Knowledge of administering in-memory SQL-on-Hadoop engines like Impala, Hive-on-Spark, etc.
- Experience with administering Spark clusters and troubleshooting resource management issues.
- Familiarity with deploying notebook environments like Zeppelin, Jupyter, etc.
- Familiarity with deploying and administering BI platforms like Tableau Server.
- Familiarity with NoSQL databases.