Hiring for Big Data Engineer – Hyderabad (Full Time)

Black Knight India Solutions Private Limited

5-7 yrs

69 days ago

Hyderabad

Primary Skills: Big Data

Secondary Skills: Big Data

No. of Positions: 10

Salary Range: 6-8 lac

Job Description

Dear Candidate,

 

Greetings!

 

We are looking for a Big Data Engineer with strong experience in the AWS ecosystem (Redshift, RDS, EMR, Kinesis, S3, Data Pipeline, Glue, Athena, EC2, Lambda, etc.).

 

 

Please find the JD below. If you are interested, send the following details along with your updated resume to pavan.juluri@bkfs.com:

 

 

Overall Exp:

Relevant Exp in Big Data:

Notice Period:

Current CTC:

Expected CTC:

Current Location:

Preferred Location:

 

 

The Role:

 

As a Sr. Data Engineer/Developer in the Data Engineering team, you will join a team of brilliant, friendly and energetic solutions architects, developers, QA engineers and project managers who strive to deliver best-of-breed custom solutions to our customers.

 

Are you someone who can thrive in a high-energy, high-growth, fast-paced environment? Then you might be just who we are looking for.

 

Key Responsibilities:

  • Responsible for real-time aggregation of large amounts of streaming and batch data from various sources into a data lake (see the illustrative sketch after this list).
  • Define data integration flows (ETL, ELT, stream processing, events/event processors, etc.).
  • Work alongside the development team to build data management platforms using Amazon Redshift, Amazon Elastic MapReduce (EMR), Amazon Athena, AWS Glue, AWS Lake Formation, AWS Data Pipeline and other related services.
  • Design, implement and support a platform that can provide ad-hoc access to large datasets.
  • Strong experience with databases, both relational and non-relational.
  • Strong experience with AWS big data processing tools.
  • Strong experience with scripting/programming languages such as JavaScript, Java/Scala, .NET, PHP, Python and Node.js.
  • Good to have: DevOps experience with CI/CD pipelines, configuration management tools and scripting.
  • Must have exposure to search tool implementations (Elasticsearch stack or similar).
  • Nice to have: Cloudera and Hadoop development and administration experience.
  • Good understanding of Windows, Unix/Linux operating systems and networking.
  • Collaborate with developers and operations teams to build and deploy custom solutions in AWS.
  • Good understanding of microservices, web-based applications and REST APIs.
  • Prior experience managing production cloud infrastructure at scale (AWS).
  • Strong organization skills with high attention to detail.
  • Able to work independently with minimal supervision.
  • Excellent communication skills – written, verbal, presentation and interpersonal.
  • Willing to learn new skills and implement new technologies.
  • Must have the ability to be on call.
  • Bachelor’s degree in computer science, information systems or another related field, or equivalent work experience.
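
To illustrate the kind of streaming-to-data-lake work listed above, here is a minimal, hypothetical sketch in Python using boto3: it polls a Kinesis stream and lands micro-batches of records in S3 as date-partitioned JSON Lines. The stream name, bucket, key layout and the drain_shard_to_s3 helper are placeholders invented for this example, not details of the role.

    # Hypothetical sketch only: stream name, bucket and key layout are placeholders.
    import time
    from datetime import datetime, timezone

    import boto3

    STREAM_NAME = "example-events"      # placeholder Kinesis stream
    LAKE_BUCKET = "example-data-lake"   # placeholder S3 bucket for the raw/landing zone

    kinesis = boto3.client("kinesis")
    s3 = boto3.client("s3")


    def drain_shard_to_s3(shard_id: str, batch_size: int = 500) -> None:
        """Poll one Kinesis shard and land micro-batches of records as JSON Lines in S3."""
        iterator = kinesis.get_shard_iterator(
            StreamName=STREAM_NAME,
            ShardId=shard_id,
            ShardIteratorType="TRIM_HORIZON",
        )["ShardIterator"]

        while iterator:
            resp = kinesis.get_records(ShardIterator=iterator, Limit=batch_size)
            records = [r["Data"].decode("utf-8") for r in resp["Records"]]
            iterator = resp.get("NextShardIterator")

            if records:
                # Partition objects by ingestion date so Athena/Glue can prune partitions later.
                now = datetime.now(timezone.utc)
                key = f"raw/events/dt={now:%Y-%m-%d}/{shard_id}-{int(now.timestamp())}.jsonl"
                s3.put_object(
                    Bucket=LAKE_BUCKET,
                    Key=key,
                    Body="\n".join(records).encode("utf-8"),
                )

            time.sleep(1)  # naive throttle; a real pipeline would use KCL, Kinesis Data Firehose or a Glue streaming job


    if __name__ == "__main__":
        for shard in kinesis.list_shards(StreamName=STREAM_NAME)["Shards"]:
            drain_shard_to_s3(shard["ShardId"])

Date-based key prefixes keep the landing zone queryable from Athena with partition pruning; in practice the same ingestion would more likely be handled by the Kinesis Client Library, Kinesis Data Firehose or an AWS Glue streaming job rather than raw get_records polling.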

 

 
