Nilasu Consulting Services Pvt Ltd

Data Engineer (NCS/Job/1457)

For a Data and Artificial Intelligence Company

5 - 10 Years

Full Time

Immediate

Up to 19 LPA

1 Position(s)

Bangalore / Bengaluru


Job Description

Skills and Qualifications:

  • 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack.
  • 2+ years of experience building data pipelines with Databricks/PySpark/SQL.
  • Experience writing and interpreting SQL queries and designing data models and data standards.
  • Experience with SQL Server, Oracle, and/or cloud databases.
  • Experience in data warehousing and data marts, including Star and Snowflake schema models.
  • Experience loading data into databases from other databases and files.
  • Experience analyzing data profiling results and drawing design conclusions from them.
  • Understanding of business processes and the relationships between systems and applications.
  • Must be comfortable conversing with end users.
  • Must have the ability to manage multiple projects/clients simultaneously.
  • Excellent analytical, verbal, and written communication skills.
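The Star-schema modeling skill listed above can be illustrated in miniature. This is a hedged sketch only: the table and column names (`dim_customer`, `fact_sales`, etc.) are hypothetical, and in practice this structure would live in a warehouse and be queried via SQL or PySpark rather than plain Python:

```python
# Minimal star-schema sketch: one fact table keyed to two dimension tables.
# All table and column names here are illustrative, not from the job posting.
dim_customer = {
    1: {"name": "Acme Corp", "region": "South"},
    2: {"name": "Globex", "region": "North"},
}
dim_product = {
    10: {"name": "Widget", "category": "Hardware"},
    11: {"name": "Gadget", "category": "Hardware"},
}
# Fact rows hold foreign keys plus numeric measures, never descriptive
# attributes -- those belong in the dimensions.
fact_sales = [
    {"customer_id": 1, "product_id": 10, "amount": 250.0},
    {"customer_id": 2, "product_id": 11, "amount": 400.0},
    {"customer_id": 1, "product_id": 11, "amount": 150.0},
]

def sales_by_region(facts, customers):
    """Aggregate a fact measure by a dimension attribute (a typical BI query)."""
    totals = {}
    for row in facts:
        region = customers[row["customer_id"]]["region"]
        totals[region] = totals.get(region, 0.0) + row["amount"]
    return totals

print(sales_by_region(fact_sales, dim_customer))
# → {'South': 400.0, 'North': 400.0}
```

A Snowflake model would further normalize the dimensions (e.g. splitting `category` out of `dim_product` into its own table) at the cost of extra joins.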


Role and Responsibilities:

  • Work with business stakeholders to build data solutions that address analytical and reporting requirements.
  • Work with application developers and business analysts to implement and optimize Databricks/AWS-based implementations that meet data requirements.
  • Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow.
  • Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation.
  • Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases.
  • Conduct root cause analysis and resolve production problems and data issues.
  • Create and maintain up-to-date documentation of the data model, data flows, and field-level mappings.
  • Provide support for production problems and daily batch processing.
  • Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta tables, Parquet), and views to ensure data integrity and performance.
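The ETL responsibilities above follow a standard extract/transform/load shape. The sketch below shows that shape in plain Python with hypothetical record names; an actual Databricks/Glue pipeline would read from S3 or JDBC sources into PySpark DataFrames and write to Delta tables instead of Python lists:

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract stage: parse source records. In production this would be a
    spark.read or AWS Glue source, not an in-memory CSV string."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform stage: cast types, drop malformed records, keep only the
    fields the target model needs."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # a real pipeline would route these to a quarantine table
        out.append({"order_id": r["order_id"], "amount": amount})
    return out

def load(rows: list[dict], target: list) -> int:
    """Load stage: append to the target (a Delta table write in Databricks)."""
    target.extend(rows)
    return len(rows)

# Hypothetical source data: one row is malformed and is dropped in transform.
raw = "order_id,amount\nA1,19.99\nA2,not_a_number\nA3,5.0\n"
warehouse: list[dict] = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # → 2
```

Keeping the three stages as separate functions mirrors how Databricks notebooks or Airflow tasks are typically split, which makes each stage independently testable and easier to debug during root cause analysis.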