Data/AI Engineer | Axrail

Data Engineer

Kuala Lumpur

Overview

Transform Data into Intelligence with AWS & AI Innovation

At Axrail, an AWS Premier Partner in Malaysia, we're bridging data engineering with cutting-edge AI to solve real-world challenges. We're looking for a Data Engineer who thrives on building scalable data pipelines.

Whether you're a fresh graduate with a passion for data or an experienced engineer ready to scale production systems, this role lets you work with PySpark and the AWS data stack in a hybrid environment.

Join us to shape the future of data-powered decision-making!

Key Responsibilities

📊 Data Engineering Excellence

  • Develop and optimize ETL pipelines for structured/unstructured data using PySpark, Glue, and Redshift.

  • Ensure data integrity, security, and scalability across all pipelines.

📈 Dashboard & Visualization

  • Develop interactive dashboards (using QuickSight or Tableau) to visualize complex datasets.

  • Collaborate with stakeholders to translate data into business insights.

⚡ Performance Optimization

  • Fine-tune PySpark jobs and SQL queries for cost and performance efficiency.

  • Proactively monitor and resolve data pipeline bottlenecks.

☁️ AWS & Cloud-Native Development

  • Deploy and manage serverless data workflows (Lambda, Step Functions).

  • Stay ahead of AWS's latest data/AI services (e.g., Amazon Q, new SageMaker features).

🌟 Key Attributes for Success

  • Problem-Solver: You debug data chaos and optimize pipelines like a pro.

  • AI Explorer: You’re curious about GenAI’s role in data engineering (e.g., synthetic data generation).

  • Visual Storyteller: You turn complex data into clear, impactful visualizations.

  • Team Player: You thrive in cross-functional teams (data scientists, business analysts, engineers).

🎓 Qualifications

  • Bachelor’s degree in Computer Science, Data Science, or related fields (or equivalent experience).

  • Experienced candidates: 3+ years building and fine-tuning data pipelines.

  • Fresh graduates with prior hands-on data/AI projects (academic or personal) are welcome to apply!

What You Bring

🛠️ Technical Skills

  • PySpark & AWS Glue: For large-scale ETL and data processing.

  • AI/ML Frameworks: TensorFlow, PyTorch, scikit-learn (nice to have).

  • Cloud Data Tools: Redshift, S3, EMR, SageMaker.

  • Dashboarding: QuickSight, Tableau, or Power BI.

  • Bonus:

    • AWS Certifications (Data Analytics, Machine Learning Specialty).

    • Generative AI experience (e.g., prompt engineering, LLM fine-tuning).

Benefits

💵 Competitive salaries

💼 Career growth opportunities

⏱️ Flexible working hours

🚀 Attractive benefits

✨ If this sounds interesting, we would love to hear from you. Please include whatever you believe is relevant: resume, GitHub profile, code samples, links to personal projects, etc.
