Current Openings:

Tableau Business Intelligence Opportunities

We are currently hiring at our Gurgaon, India office.  
We will consider highly qualified candidates in most geographies.

Position Requirements:

  • At least 3 to 4+ years of IT experience in Tableau design and development; must have worked on large enterprise-level applications
  • 2 to 3+ years delivering analytic and canned reports as a Tableau Developer
  • General development skills in data visualization & reporting
  • Experienced in the technical methods and aesthetic organization best suited to presenting data in Tableau reports & dashboards
  • Ability to create high-impact, meaningful analytics (e.g. heat maps, KPIs) to help the business make informed decisions
  • Ability to maintain Tableau reports & analysis, enhance existing reports, and create new reports based on business needs
  • Ability to present Tableau solutions to the business and modify solutions based on user feedback
  • Ability to create Tableau reports that give the business easy-to-understand patterns, visuals, and KPIs, and to develop compelling visual analytics
  • Willingness to share knowledge and build capabilities within the team
  • Ability to use custom SQL for complex data pulls
  • Ability to draw upon full range of Tableau platform technologies to design and implement proof of concept solutions and create advanced BI visualizations
  • Desire to be a proactive contributor and subject matter expert
  • Experience with or working knowledge of AWS Redshift and open-source tools would be an added advantage
  • Experience creating users, groups, projects, workbooks, and the appropriate permission sets for Tableau Server logons and security checks
  • Technical oversight of the Tableau Server infrastructure from user setup to performance tuning in a high availability environment
  • Knowledge of Data Warehousing concepts, Relational and Dimensional data modeling concepts
  • SQL reporting skills with the ability to create SQL views and write SQL queries (a brief sketch follows this list)
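
As a concrete illustration of the SQL skills above, here is a minimal sketch of a reporting view and the matching custom SQL pull one might paste into a Tableau data source. All table and column names (orders, order_date, region, amount) are hypothetical, and the DATE_TRUNC syntax assumes a PostgreSQL/Redshift-style database:

    -- Hypothetical view that pre-aggregates order data for a Tableau data source.
    CREATE VIEW vw_monthly_sales AS
    SELECT
        region,
        DATE_TRUNC('month', order_date) AS order_month,
        COUNT(*)                        AS order_count,
        SUM(amount)                     AS total_amount
    FROM orders
    GROUP BY region, DATE_TRUNC('month', order_date);

    -- The same pull expressed as custom SQL pasted directly into Tableau.
    SELECT region, order_month, total_amount
    FROM vw_monthly_sales
    WHERE order_month >= DATE '2024-01-01';  -- arbitrary example cutoff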

Qualities we look for:

  • Adaptability
  • Intuitive brilliance and a fact-based analytic discipline 
  • Empathy, compassion, warmth, good humor and ability to work as part of a team
  • Superior communication skills
  • Willingness to work in a start-up and be part of something new

Hadoop Architect Opportunities

We are currently hiring at our Gurgaon, India office.  
We will consider highly qualified candidates in most geographies.

Job Summary:

The Hadoop Architect works closely with the data architecture team and developers to extend the Elevondata data lake framework in support of multiple clients.  The variety of data across our clients' enterprises is extensive and includes a great deal of unstructured and semi-structured data.

This individual will support our Senior Data Management Advisors in a technical capacity, owning the details of the solution architecture for the development of the data layer, with a focus on scalability, manageability, performance, security, data lineage, and metadata.

The position requires a thorough understanding of Hadoop ecosystem tools for the extension/build of data ingestion/storage, data transformation, and data load routines.  It also requires an understanding of data provisioning to downstream consumers and processes, and of eventual consumption by analysts for experimental query (Spark SQL, Impala, Python, etc.).
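
For a flavor of that experimental-query layer, a minimal Spark SQL sketch might look like the following; the raw_events table and its columns (event_type, event_ts, user_id) are assumptions for illustration, not an actual client schema:

    -- Hypothetical ad-hoc query over a raw events table in the data lake.
    SELECT event_type,
           COUNT(DISTINCT user_id) AS unique_users
    FROM raw_events
    WHERE event_ts >= date_sub(current_date(), 30)  -- last 30 days
    GROUP BY event_type
    ORDER BY unique_users DESC;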

Role Description:

The Hadoop Architect will help design and architect the data ingestion, ELT, and data quality platform.

Responsibilities:

  • Build, Deploy and Support custom data management applications on Hadoop
  • Build, Deploy and Support ETL/ELT to load data into Hadoop/NoSQL, using scheduling concepts and tools in the Hadoop ecosystem
  • Design, Build, Deploy and Support schemas for data acquisition, transformations, and data integration (see the sketch after this list)
  • Build, Deploy and Support solutions for metadata, data quality, and security management
  • Build, Deploy and Support Hadoop data ecosystem, in-situ query, and Web Services
  • Performance tuning of a Hadoop/NoSQL environment
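
By way of illustration, a schema-on-read data acquisition step in this ecosystem might resemble the HiveQL sketch below. The table, columns, HDFS location, and partition value are all hypothetical:

    -- Hypothetical external table declared over files already landed in HDFS.
    CREATE EXTERNAL TABLE IF NOT EXISTS staging_clicks (
        user_id   STRING,
        url       STRING,
        click_ts  TIMESTAMP
    )
    PARTITIONED BY (load_date STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION '/data/lake/raw/clicks';

    -- Register a newly landed partition so downstream ELT can query it.
    ALTER TABLE staging_clicks ADD IF NOT EXISTS PARTITION (load_date='2024-01-15');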

Qualifications:

  • Bachelor’s degree in Computer Science or equivalent experience
  • Specific experience required on Hadoop (HDFS) technology and the associated Apache open source ecosystem (Hive, Pig, MapReduce, HBase, Sqoop, et al.)
  • ELT experience (e.g. Pentaho Kettle or Talend) and Sqoop
  • Exposure to NoSQL/columnar data stores like Cassandra, InfiniDB, ParAccel (RedShift) and document stores like MongoDB et al.
  • Deep proficiency with data management – traditional RDBMS (Oracle, MS SQL, et al.), MPP appliances (Netezza, Teradata, et al.), and open source DBMS (MySQL, PostgreSQL, et al.)
  • Working knowledge of Services Layer – Java, HiveQL, RESTful, JSON, Maven, Subversion, JIRA, Eclipse et al.
  • Ability to wear multiple hats spanning the software development life cycle across Requirements, Design, Code Development, QA, Testing and Deployment
  • Experience with Object-Oriented Programming Languages (Java, C++, Python)
  • Excellent Problem Solving Skills
  • Excellent oral and written communication skills
  • Working knowledge of disaster recovery (DR) in the Hadoop ecosystem
  • Working knowledge of handling fine-grained entitlements in the Hadoop ecosystem

Highly Desired Qualifications:

  • Minimum of 8-10 years developing in data warehousing or digital analytics, with 2-4 years in the Hadoop ecosystem
  • Exposure to AWS Big Data Architecture and Services
  • Experience with MRv2/YARN cluster management and implementation experience on at least one of the commercial Hadoop distributions (Cloudera, Hortonworks, MapR, IBM, etc.)
  • Experience working in an agile team environment.

If this sounds like you, drop us a line.
