Software Engineer – Data Engineer, Big Data, Hadoop (3–6 years) (Pune, Job ID 128931)

Company Name: Amdocs Inc

Location: Pune, MH, IN

Job Duration: 2021-10-14 to 2021-11-13

Overview

Job ID: 128931
Required Travel: Minimal
Managerial: No
Location: India – Pune (Amdocs Site)

Who are we?

If you’re a smartphone user then you are part of an ever more connected and digital world. At Amdocs, we are leading the digital revolution into the future. From virtualized telecommunications networks, Big Data and Internet of Things to mobile financial services, billing and operational support systems, we are continually evolving our business to help you become more connected. We make sure that when you watch a video on YouTube, message friends on Snapchat or send your images on Instagram, you get great service anytime, anywhere, and on any device. We are at the heart of the telecommunications industry working with giants such as AT&T, Vodafone, Telstra and Telefonica, helping them create an amazing new world for you where technology is being used in amazing new ways every single day.

In one sentence

Responsible for the design, development, modification, debugging, and/or maintenance of software systems.

What will your job look like?

  • Design, create, enhance, and support Hadoop data pipelines for different domains using Big Data technologies.
  • Take charge of data transformation, data models, schemas, metadata, and workload management.
  • Perform development and deployment tasks; be able to code, unit test, and deploy.
  • Perform application analysis and propose technical solutions for application enhancement, optimizing existing ETL processes, etc.
  • Handle and resolve production issues (Tier 2 and weekend support) and ensure SLAs are met.
  • Create the necessary documentation for all project deliverable phases.
  • Collaborate with multi-discipline interfaces: the dev team, business analysts, infrastructure, information security, and end users.

All you need is…

  • 1–3 years of experience with Hadoop architecture and other relevant Big Data tools such as HBase, HDFS, Hive, MapReduce, etc.
  • Good communication and collaboration skills
  • Independent and self-learning attitude
  • Experience in:
    • Hadoop platform (Ambari etc.)
    • Designing, building, and managing data pipelines in Python and related technologies
    • Kafka message-queuing technologies, Apache NiFi stream data integration, RESTful APIs, and open systems
    • Object-oriented/object-function scripting languages, e.g., R, Python, Scala, or similar
    • Informatica BDM (Big Data Management) or another Big Data ETL tool
  • Hands-on experience with SQL, Unix, and advanced Unix shell scripting
  • Familiarity with handling XML, JSON, structured, fixed-width, and unstructured files using custom Pig/Hive

Good to have skills:

  • Knowledge of any cloud technology (AWS/Azure/GCP)
  • Experience working with data discovery, analytics, and BI software tools such as Tableau and Power BI
  • Understanding of / experience with data virtualization tools such as TIBCO DV, Denodo, etc.
  • Understanding of / experience with data governance tools such as Informatica Enterprise Data Catalog (EDC), etc.

Why you will love this job:

•    You will be challenged to design and develop new software applications.
•    You will have the opportunity to work in a growing organization, with ever-expanding opportunities for personal development.
Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.