Intineri Infosol
+91-981-847-6557

info@intineriinfosol.com

Big Data Developer | Remote | Hadoop, HBase, Hive, Kudu, Spark, Spark Streaming, Spark SQL, Hive queries, Spark applications


    Job: Big Data Developer
    Location: Remote, India
    Designation: Big Data Developer - WFH
    Experience: 4-12 Years
    Salary: 5-18 Lakh
    Skills: Hadoop, HBase, Hive, Kudu, Spark, Spark Streaming, Spark SQL, Hive queries, Spark applications
    Job Type: Contract to Hire

    Description

    We are seeking a highly skilled L3 Big Data Developer to join our team. The ideal candidate will have extensive experience in designing, developing, and implementing scalable Big Data solutions using the Hadoop ecosystem, along with strong expertise in technologies like HBase, Hive, Kudu, and Spark. You will be responsible for optimizing data storage, retrieval operations, and real-time processing pipelines to deliver high-performance analytics solutions.

    If you have a passion for working with large-scale distributed systems, real-time data processing, and machine learning applications, this is an exciting opportunity to work on cutting-edge projects and drive the transformation of data into actionable insights.

    Job Responsibilities

    1. Big Data Solution Design & Development:

      • Architect and develop highly scalable Big Data solutions using Hadoop ecosystem technologies, including HBase, Hive, Kudu, and Spark.
      • Create and maintain efficient data models and schemas tailored to business needs.
    2. Data Processing & Transformation:

      • Develop complex Hive queries and data pipelines to transform raw data into structured formats for reporting and analysis.
      • Implement real-time data processing pipelines using Spark Streaming and Spark SQL, ensuring low latency and high throughput (illustrative batch and streaming sketches follow this list).
    3. Performance Optimization:

      • Optimize Spark applications for performance and resource efficiency, including tuning transformations, partitioning strategies, and utilizing in-memory caching.
      • Continuously monitor and improve the performance of data processing jobs.
    4. Machine Learning & Analytics:

      • Utilize Spark MLlib for machine learning tasks such as classification, regression, clustering, and collaborative filtering (see the MLlib sketch after this list).
      • Build Kudu tables for real-time analytics and ensure fast analytical queries using Kudu's hybrid architecture.
    5. Data Ingestion & Storage Optimization:

      • Design and implement HBase schemas to accommodate evolving business requirements and optimize data ingestion pipelines.
      • Leverage Kudu for fast data ingestion and ensure efficient data storage and retrieval operations.
    6. Collaboration & Documentation:

      • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
      • Document architecture, design decisions, and processes for future reference and team knowledge sharing.
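
    The sketches that follow are illustrative only and are not part of the role description. First, for items 2 and 3: a minimal PySpark batch transform, assuming a hypothetical Hive table raw.events and placeholder output paths. It shows a Hive query issued through Spark SQL together with the repartitioning, in-memory caching, and partitioned writes mentioned under Performance Optimization.

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("daily-aggregation")                     # hypothetical job name
        .enableHiveSupport()                              # lets spark.sql() run Hive queries
        .config("spark.sql.shuffle.partitions", "200")    # example tuning knob
        .getOrCreate()
    )

    # Transform raw events into a structured daily summary with a Hive query.
    # The table and column names are placeholders.
    daily = spark.sql("""
        SELECT user_id,
               to_date(event_time) AS dt,
               COUNT(*)            AS events,
               SUM(amount)         AS total_amount
        FROM raw.events
        WHERE event_time >= date_sub(current_date(), 30)
        GROUP BY user_id, to_date(event_time)
    """)

    # Repartition by the write key and cache the result, since it feeds
    # several downstream reports.
    daily = daily.repartition("dt").cache()

    # Write a date-partitioned Parquet table for reporting and analysis.
    (daily.write
        .mode("overwrite")
        .partitionBy("dt")
        .parquet("/warehouse/analytics/daily_user_summary"))

    print(daily.count())   # materialise the cache and report the row count
    spark.stop()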
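
    Second, for the real-time side of item 2: a minimal Structured Streaming sketch, assuming a Kafka source (with the spark-sql-kafka connector on the classpath), a hypothetical events topic, and placeholder checkpoint and output paths. The watermark bounds aggregation state so the windowed job keeps latency and memory in check.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import (StructType, StructField, StringType,
                                   DoubleType, TimestampType)

    spark = SparkSession.builder.appName("clickstream-pipeline").getOrCreate()

    # Assumed layout of the incoming JSON events.
    event_schema = StructType([
        StructField("user_id", StringType()),
        StructField("event_type", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Read a stream of JSON events from Kafka (broker and topic are placeholders).
    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    events = (
        raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    # Windowed aggregation with a watermark to bound state.
    per_minute = (
        events
        .withWatermark("event_time", "5 minutes")
        .groupBy(F.window("event_time", "1 minute"), "event_type")
        .agg(F.count("*").alias("events"), F.sum("amount").alias("total_amount"))
    )

    # Append the results to a Parquet sink (paths are placeholders).
    query = (
        per_minute.writeStream
        .outputMode("append")
        .format("parquet")
        .option("path", "/data/agg/per_minute")
        .option("checkpointLocation", "/data/checkpoints/per_minute")
        .trigger(processingTime="1 minute")
        .start()
    )
    query.awaitTermination()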
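
    Finally, for item 4: a minimal Spark MLlib (DataFrame API) classification pipeline. The table analytics.customer_features, its columns, and the choice of logistic regression are hypothetical; the same Pipeline pattern applies to regression and clustering tasks.

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler, StringIndexer
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    spark = SparkSession.builder.appName("churn-model").enableHiveSupport().getOrCreate()

    # Assumed Hive table with numeric features and a string label column.
    df = spark.table("analytics.customer_features")

    label = StringIndexer(inputCol="churned", outputCol="label")
    features = VectorAssembler(
        inputCols=["tenure_days", "monthly_spend", "support_tickets"],
        outputCol="features",
    )
    lr = LogisticRegression(maxIter=50, regParam=0.01)

    # Train on a random split and evaluate on the held-out portion.
    train, test = df.randomSplit([0.8, 0.2], seed=42)
    model = Pipeline(stages=[label, features, lr]).fit(train)

    auc = BinaryClassificationEvaluator(metricName="areaUnderROC").evaluate(model.transform(test))
    print(f"Test AUC: {auc:.3f}")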

    Required Skills & Qualifications:

    • Strong experience in Hadoop ecosystem technologies like HBase, Hive, Kudu, and Spark.
    • Proficiency in writing and optimizing complex Hive queries.
    • Experience with Spark Streaming, Spark SQL, and Spark MLlib for machine learning tasks.
    • Expertise in designing and optimizing HBase schemas and data models.
    • Familiarity with Kudu for fast data ingestion and analytical queries.
    • Hands-on experience in tuning Spark applications for performance and efficiency.
    • Excellent problem-solving skills and ability to work in a fast-paced environment.
    • Strong understanding of distributed computing and big data architecture.

    Apply Now

    Only .docx, .rtf, and .pdf formats are accepted, up to a maximum size of 5 MB.

    Trusted By Over 50,000 People Worldwide
