Brief Job Description: Seeking an experienced SQL Analyst to design, implement, analyze, monitor, and maintain data services using Cloudera, Big Data, and Microsoft SQL Server technologies for analytics and machine learning solutions. The successful candidate will be responsible for developing and maintaining data warehouses, executing and supporting ETL data loads, and developing new features and enhancements using Spark and Python. The analyst will collaborate with a globally dispersed development team in an Agile development environment. Strong communication skills and the ability to articulate technical and business requirements are required.

Must haves:
- 2+ years of experience with the Big Data ecosystem, including tools such as Hadoop, Spark, MapReduce, Hive, and Impala
- 2+ years of experience developing Spark (PySpark) applications for large-scale data processing
- 5+ years of experience using Structured Query Language (SQL) for data analysis
- 3+ years of experience with Python development
- Experience on BI/DW projects designing, developing, testing, and troubleshooting Extract, Transform, Load (ETL) processes

Nice to haves:
- Experience with Denodo data virtualization
- Experience with Linux scripting
- Familiarity with Agile methodologies, Kanban and/or Scrum