Posted on: 23/07/2025
About the Role:
- Data Migration: migrate data from Hive and other databases to Salesforce and other databases, and vice versa.
- Data Modeling: understand existing sources and data models, identify gaps, and build the future-state architecture.
- Data Pipelines: build data pipelines for data mart, data warehouse, and reporting requirements.
- Data Governance: build the framework for data governance, data quality profiling, and reporting.
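As a minimal sketch of the data quality profiling mentioned above, the following computes per-column null counts and distinct counts over a batch of records. The field names (`user_id`, `city`) and the record layout are illustrative assumptions, not part of the posting.

```python
# Sketch of simple data-quality profiling: per-column null and
# distinct counts over a list of dict records. Field names are
# hypothetical examples.
from collections import defaultdict

def profile(records):
    """Return {column: {'nulls': n, 'distinct': m}} for a list of dicts."""
    stats = defaultdict(lambda: {"nulls": 0, "values": set()})
    for row in records:
        for col, val in row.items():
            if val is None:
                stats[col]["nulls"] += 1
            else:
                stats[col]["values"].add(val)
    return {col: {"nulls": s["nulls"], "distinct": len(s["values"])}
            for col, s in stats.items()}

records = [
    {"user_id": 1, "city": "SF"},
    {"user_id": 2, "city": None},
    {"user_id": 2, "city": "NY"},
]
print(profile(records))
# → {'user_id': {'nulls': 0, 'distinct': 2}, 'city': {'nulls': 1, 'distinct': 2}}
```

In practice a profiling framework of this kind would run such checks as a scheduled pipeline stage and publish the results to a reporting layer.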
What the Candidate Will Do:
- Demonstrate strong knowledge of, and the ability to operationalize, leading data technologies and best practices.
- Build dimensional data models to support business requirements and reporting needs.
- Design, build and automate the deployment of data pipelines and applications to support reporting and data requirements.
- Research and recommend technologies and processes to support rapid scale and future-state growth initiatives on the data front.
- Prioritize business needs, leadership questions, and ad-hoc requests for on-time delivery.
- Collaborate on architecture and technical design discussions to identify and evaluate high impact process initiatives.
- Work with the team to implement data governance and access control, and to identify and reduce security risks.
- Perform and participate in code reviews, peer inspections and technical design/specifications.
- Develop performance metrics to establish process success and work cross-functionally to consistently and accurately measure success over time.
- Deliver measurable business process improvements while re-engineering key processes and capabilities, mapping them to the future-state vision.
- Prepare documentation and detailed design specifications.
- Be able to work in a globally distributed team using an Agile/Scrum approach.
Basic Qualifications:
- 8+ years professional software development experience, including experience in the Data Engineering & Architecture space.
- Interact with product managers and business stakeholders to understand data needs and help build data infrastructure that scales across the company.
- Very strong SQL skills: advanced SQL coding (window functions, CTEs, dynamic variables, hierarchical queries, materialized views, etc.).
- Experience with data-driven architecture and systems design; knowledge of Hadoop-related technologies such as HDFS, Apache Spark, Apache Flink, Hive, and Presto.
- Good hands-on experience with object-oriented programming languages such as Python.
- Proven experience in large-scale distributed storage and database systems (SQL or NoSQL, e.g., Hive, MySQL, Cassandra) and in data warehousing architecture and data modeling.
- Working experience in cloud technologies like GCP, AWS, Azure.
- Knowledge of reporting tools like Tableau and/or other BI tools.
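The advanced SQL features listed above (CTEs and window functions) can be sketched with a short, self-contained query run through Python's built-in `sqlite3` module. The `trips` table, its columns, and the data are illustrative assumptions; note that window functions require SQLite 3.25 or later, which ships with recent Python builds.

```python
# Sketch of a CTE plus a window function, run against an in-memory
# SQLite database. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trips (city TEXT, trip_date TEXT, fare REAL);
INSERT INTO trips VALUES
  ('SF', '2025-07-01', 12.5),
  ('SF', '2025-07-02', 20.0),
  ('NY', '2025-07-01', 15.0),
  ('NY', '2025-07-02', 30.0);
""")

# CTE (WITH clause) feeding a window function: rank each trip
# within its city by fare, highest first.
query = """
WITH city_trips AS (
    SELECT city, trip_date, fare FROM trips
)
SELECT city, trip_date, fare,
       RANK() OVER (PARTITION BY city ORDER BY fare DESC) AS fare_rank
FROM city_trips
ORDER BY city, fare_rank;
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
# → ('NY', '2025-07-02', 30.0, 1)
#   ('NY', '2025-07-01', 15.0, 2)
#   ('SF', '2025-07-02', 20.0, 1)
#   ('SF', '2025-07-01', 12.5, 2)
```

The same pattern (CTEs to stage intermediate results, window functions for per-partition ranking and aggregation) carries over to Hive, Presto, and the other engines named in the posting.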
Preferred Qualifications:
- Working experience in cloud technologies like GCP, AWS, Azure.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1517505