Capgemini Big Hiring Drive 2025 - Junior Data Engineer Role

Capgemini Big Hiring Drive 2025: Capgemini Engineering stands at the leading edge of innovation, harnessing data-driven insights to fuel strategic enterprise growth. The company is seeking a passionate Junior Data Engineer with strong Python skills and enthusiasm for Cognite Data Fusion to join a forward-looking team in Bengaluru, Karnataka.

Job Overview

  • Job Title: Junior Data Engineer
  • Location: Bengaluru, Karnataka, India (PAN India)
  • Company: Capgemini Engineering
  • Employment Type: Full-Time

The Junior Data Engineer position focuses on designing, building, and maintaining scalable data pipelines and infrastructure. The successful candidate will play a pivotal role in optimising data workflows using Python and Cognite Data Fusion, ensuring seamless data integration and accessibility to empower data-driven decision-making across the company. The role offers the opportunity to work on cutting-edge data engineering initiatives in a collaborative, innovative environment.

Key Responsibilities

  • Data Pipeline Development: Design and implement end-to-end data pipelines to ingest, process, and transform large-scale structured and unstructured datasets with efficiency and scalability in mind.
  • Data Integration: Leverage Cognite Data Fusion to automate and scale the contextualisation of diverse data sources, ensuring seamless integration and accessibility for downstream applications.
  • Programming Excellence: Build robust ETL (Extract, Transform, Load) processes and data workflows using Python, prioritising code quality, scalability, and maintainability (see the sketch after this list).
  • Cross-Functional Collaboration: Partner with data scientists, analysts, and business stakeholders to understand data needs and deliver tailored solutions that drive business value.
  • Data Quality Assurance: Implement rigorous data validation and quality checks to ensure the accuracy, consistency, and reliability of data outputs.
  • Documentation: Maintain clear, comprehensive documentation of data processes, workflows, and system architectures to support transparency and scalability.
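To give a concrete sense of the day-to-day work these responsibilities describe, below is a minimal Python ETL sketch with a simple data-quality gate. It is an illustration only, not Capgemini or Cognite code: the file names, column names, and validation rule are all assumptions.

```python
import pandas as pd


def extract(path: str) -> pd.DataFrame:
    # Ingest a raw CSV export (hypothetical file and schema).
    return pd.read_csv(path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Drop rows with missing readings and normalise types.
    df = df.dropna(subset=["sensor_id", "reading"])
    df["reading"] = df["reading"].astype(float)
    df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)
    return df


def validate(df: pd.DataFrame) -> pd.DataFrame:
    # Simple data-quality gate: readings must be non-negative.
    bad = df[df["reading"] < 0]
    if not bad.empty:
        raise ValueError(f"{len(bad)} rows failed validation")
    return df


def load(df: pd.DataFrame, out_path: str) -> None:
    # Persist the cleaned data for downstream consumers.
    df.to_parquet(out_path, index=False)


if __name__ == "__main__":
    load(validate(transform(extract("raw_readings.csv"))), "clean_readings.parquet")
```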

Required Qualifications

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related discipline.
  • Proficiency in Python programming for data processing and pipeline development.
  • Familiarity with data integration platforms and tools.
  • Solid knowledge of data modelling, database design, and data warehousing principles.
  • Hands-on experience with SQL and relational database management.
  • Basic knowledge of cloud platforms such as AWS or Azure (preferred, but not mandatory).
  • Strong problem-solving skills with a keen eye for detail.
  • Excellent communication and teamwork skills to thrive in a collaborative environment.

Preferred Qualifications

  • Experience with data pipeline and workflow management tools (e.g., Apache Airflow, Luigi); a minimal example follows this list.
  • Knowledge of big data technologies and frameworks (e.g., Hadoop, Spark, Kafka).
  • Familiarity with data visualisation tools, with exposure to Grafana being a plus.
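For readers unfamiliar with workflow management tools like those listed above, here is a minimal Apache Airflow DAG sketch that schedules a daily ETL job. It assumes Airflow 2.x; the DAG id, schedule, and task callable are hypothetical placeholders rather than an actual Capgemini workflow.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_etl():
    # Placeholder for extract/transform/validate/load steps
    # such as those in the sketch above.
    print("running ETL")


# Hypothetical daily pipeline; dag_id and schedule are illustrative.
with DAG(
    dag_id="daily_sensor_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    etl_task = PythonOperator(
        task_id="run_etl",
        python_callable=run_etl,
    )
```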

How to Apply

Interested candidates can submit their resumes and cover letters via the Capgemini Engineering careers portal. Highlight relevant experience in Python, data pipelines, and any familiarity with Cognite Data Fusion or similar platforms. Applications are reviewed on a rolling basis, so early submission is recommended.

Capgemini Big Hiring Drive 2025 – Apply Link