What Jobs are available for Big Data Hadoop in Thailand?
Showing 239 Big Data Hadoop jobs in Thailand
Data Engineer/Senior Data Engineer
Posted today
Job Description
Responsibilities:
- Designing and building data pipelines: Creating automated workflows (pipelines) that efficiently ingest, transform, and load data from disparate sources into a central repository.
- Data warehousing and storage: Designing and managing databases, data lakes, and data warehouses to ensure data is stored in an organized, efficient, and accessible manner.
- Building data platforms: Building and maintaining the foundational platforms and tools that enable data scientists and analysts to perform their work effectively.
- Data quality and governance: Implementing processes to monitor and ensure the accuracy, consistency, and reliability of data.
- Performance optimization: Monitoring and tuning the data infrastructure to improve performance, reduce latency, and lower costs.
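The pipeline responsibilities above reduce to an extract-transform-load flow. As a minimal, hedged sketch (file, table, and column names are hypothetical, and SQLite stands in for the central repository), the three stages might look like:

```python
# Minimal ETL sketch (illustrative only): ingest raw CSV rows, apply a
# cleaning transform, and load them into a SQLite "warehouse" table.
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,country
1, 19.90 ,th
2,5.00,TH
3,,sg
"""

def extract(text):
    # Ingest: parse raw source rows into dicts.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Clean and standardize; drop rows missing a mandatory field.
    clean = []
    for r in rows:
        if not r["amount"].strip():
            continue
        clean.append({
            "order_id": int(r["order_id"]),
            "amount": float(r["amount"]),
            "country": r["country"].strip().upper(),  # standardize codes
        })
    return clean

def load(rows, conn):
    # Load into the central store.
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (:order_id, :amount, :country)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # → 2
```

A real pipeline would add scheduling, incremental loads, and failure handling on top of this skeleton.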
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or Data Engineering.
- 5+ years of experience in data engineering or a related field.
- Experience with data warehouse and data lake architectures.
- Excellent problem-solving, communication and leadership skills.
- Experience with ETL/ELT tools and frameworks (e.g., Oracle Data Integrator) is a plus.
Data Engineer
Posted today
Job Description
T.D. Software Co.,Ltd. is hiring a contract/temporary Data Engineer in Sathon, Bangkok. Apply now to join our team.
Job summary:
- Flexible working hours
- Expected salary: ฿60,000–฿85,000 per month
Data Engineer
Employment Condition: 6-month contract with renewal
Work model: Hybrid (1-2 days/week)
Qualifications:
- 3+ years of experience in data engineering or a similar role.
- Proven experience with Databricks, Unity Catalog, Apache Spark, and distributed data processing.
- Strong proficiency in Python, PySpark, and SQL.
- Solid understanding of data warehousing concepts, data modeling, and performance optimization.
- Experience with Azure cloud data platforms and related components (e.g., Azure Synapse).
- Familiarity with CI/CD and version control (e.g., Git, Bitbucket).
- Experience using Qlik to replicate data from RDBMS sources to Azure Databricks, including real-time data streaming.
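Replication of the kind described ultimately comes down to an idempotent merge of source rows into a target table: insert new keys, overwrite changed ones. A hedged sketch of that merge step, with SQLite standing in for both the RDBMS and Databricks and hypothetical table and column names:

```python
# Idempotent upsert ("merge into target"), the core of replicating rows
# from a source system into an analytics store. Illustrative names only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)")

def merge(rows):
    # Last-writer-wins on the primary key: new ids insert, existing ids update.
    conn.executemany(
        "INSERT INTO customers VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name=excluded.name, updated_at=excluded.updated_at",
        rows,
    )
    conn.commit()

merge([(1, "Ana", "2024-01-01"), (2, "Bo", "2024-01-01")])       # initial load
merge([(2, "Bo Chai", "2024-02-01"), (3, "Cyn", "2024-02-01")])  # incremental batch
print(conn.execute("SELECT id, name FROM customers ORDER BY id").fetchall())
# → [(1, 'Ana'), (2, 'Bo Chai'), (3, 'Cyn')]
```

Because the merge is idempotent, replaying the same incremental batch leaves the target unchanged, which is what makes change-data-capture replication safe to retry.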
Data Engineer
Posted today
Job Description
Responsibilities
- Develop and maintain ETL processes to extract, transform, and load data from various sources to target data repositories.
- Work closely with the Data Architect to understand data models and ensure that ETL processes align with the overall data architecture.
- Monitor and maintain data infrastructure to ensure data quality and accuracy, including implementing data security and privacy policies.
- Provide technical support to end-users to help them access and utilize data effectively.
- Work collaboratively with other members of the data team to ensure the effective and efficient use of data within the organization.
- Develop and maintain documentation of ETL processes and data infrastructure.
Qualifications
- Strong SQL knowledge and experience with query authoring and a variety of relational databases.
- Experience with DBT (data build tool) for data transformation, testing, and documentation.
- Experience building and optimizing 'big data' data pipelines, architectures and data sets on Databricks platform.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Experience with object-oriented/object function scripting languages: Python, Java, Scala, C#, etc.
- Experience with Apache Airflow for workflow orchestration and pipeline monitoring.
- Experience in using Databricks, PySpark for big data processing.
- Previous work designing/managing data structures within a data warehouse.
- Experience in data stream processing and its processor is a plus.
- Experience developing applications (e.g., web applications, mobile applications, REST web services) is a plus.
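At its core, the workflow orchestration that Airflow provides is dependency-ordered task execution. A toy illustration of that idea (this is not Airflow's API, and the task names are made up), using Python's standard graphlib:

```python
# Dependency-ordered execution: the concept behind a workflow orchestrator.
# Each task declares its upstream tasks; the scheduler runs them in an
# order that respects every dependency.
from graphlib import TopologicalSorter

# task -> set of upstream tasks that must finish first (hypothetical names)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
    "report": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # → ['extract', 'transform', 'quality_check', 'load', 'report']
```

An orchestrator like Airflow adds scheduling, retries, and monitoring on top of exactly this ordering guarantee.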
Data Engineer
Posted today
Job Description
We are seeking a talented Data Engineer to join our team, responsible for managing and analyzing data to ensure the highest efficiency of our data systems.
Key Responsibilities
1. Data Integration & Orchestration
- Design, implement, and manage orchestrated data pipelines using tools (e.g., drag-and-drop operators).
- Develop and maintain ODS/ADS models, data flows, APIs, and file-driven data delivery.
- Conduct data exploration and model orchestration aligned with business requirements.
- Create and maintain standardized data models and mappings.
- Implement and deliver APIs and structured data files.
- Ensure documentation and metadata integrity throughout data implementations.
- Define and enforce enterprise-wide data standards (naming, formats, metadata).
- Operate data quality services and validation rules.
- Support data asset cataloging and metadata governance.
- Execute and automate testing for:
  - Data models
  - Data integration flows
  - File and API openness
  - Data quality rules and compliance
- Maintain automated test scripts and document results.
- Routinely update the task plan and report delivery schedules to the Project Lead.
- Verify the completeness and accuracy of input documents and report non-compliance.
- Deliver corresponding project outputs based on approved documents.
- Perform self-inspection of all remote deliverables before submission.
- Initiate the final acceptance process, ensuring alignment with agreed acceptance criteria.
- Regularly submit remote delivery progress reports.
Qualifications
- Education: Bachelor's degree or higher in Computer Science, Information Technology, Engineering, or a related field.
- Experience: 1–5 years of experience in data management, data system development, or a related field.
- Programming Skills: Proficient in SQL and Python.
- Big Data Knowledge: Familiar with Kafka, Spark, and Flink technologies.
- Data Governance Knowledge: Possesses knowledge and understanding of data governance, data assets, and data standards.
- Good English communication skills are required. Proficiency in Chinese (HSK Level 4 or above) will be considered a plus.
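The data quality services and validation rules mentioned above often boil down to per-record checks that report which records violate which rule. A minimal sketch, with hypothetical field names and thresholds:

```python
# Rule-based data quality checks: each named rule is a predicate, and the
# output maps rule name -> ids of violating records. Illustrative data only.
records = [
    {"id": 1, "email": "a@x.co", "age": 34},
    {"id": 2, "email": "", "age": 29},
    {"id": 3, "email": "c@x.co", "age": -5},
]

rules = {
    "email_present": lambda r: bool(r["email"]),
    "age_in_range": lambda r: 0 <= r["age"] <= 130,
}

violations = {
    name: [r["id"] for r in records if not check(r)]
    for name, check in rules.items()
}
print(violations)  # → {'email_present': [2], 'age_in_range': [3]}
```

In a real platform the same rules would run as automated validation steps inside the pipeline, with violations logged to a quality dashboard rather than printed.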
Data Engineer
Posted today
Job Description
Posting Date: 9 Oct 2025
Job Function: IT and Digital technology
Company: Banpu Public Company Limited
Location: Thailand
Responsibilities
Job Summary:
- Design, architect, and implement next generation system architecture and automation solutions in Cloud environments.
- Implement, maintain, and improve Continuous Integration and Continuous Delivery environments.
- Own and lead initiatives to define, design, and implement DevOps solutions which include reference architectures, estimates, and costing.
- Provide technical leadership, project guidance, and business development in various technology areas: API, Message Queue
- Advise business and technology delivery leadership on how to translate the client's infrastructure and automation of business requirements into executable technology solutions.
- Participate in workshops and provide presentations of the proposed solution.
- Act as a subject matter expert on Cloud/DevOps best practices with Terraform, Cloud Formation, Auto Scaling Groups, and Configuration Management.
- Apply best practices and emerging concepts in Cloud/DevOps, Infrastructure Automation, and Enterprise Security.
- Act as a technical liaison between users, data scientists, AI developers, service engineering teams and support.
- Review and audit existing solutions, designs, and system architecture.
- Profile and troubleshoot existing solutions.
- Create technical documentation.
Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- Minimum 5 years of experience with automated deployment, continuous integration, and release engineering tools (Nagios, Zabbix, Cacti, New Relic, Graphite) or a related field.
- Good knowledge of Agile environments.
- Experience building global and scalable platforms.
- Broad knowledge of software development and software testing methodologies along with change and configuration management practices in Linux based environments.
- Strong scripting skills (Python, Ruby, or Groovy).
- Strong knowledge of infrastructure automation tools (Puppet, Chef, Ansible) and of Cloud/DevOps solution delivery and strategy.
- Practical expertise in performance tuning and optimization, bottleneck problems analysis.
- Solid technical expertise and troubleshooting skills.
- Good command of English communication (TOEIC score 700 and above).
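Automation scripts in this kind of DevOps role frequently need resilience against transient failures. A small, self-contained sketch of retry with exponential backoff (the flaky "service call" is simulated; function names and delays are illustrative):

```python
# Exponential-backoff retry, a common pattern in deployment and monitoring
# scripts: re-attempt a transient failure with doubling delays before giving up.
import time

def retry(fn, attempts=4, base_delay=0.01):
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}
def flaky():
    # Simulated service call that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky)
print(result, calls["n"])  # → ok 3
```

Adding random jitter to the delays is a common refinement to avoid many clients retrying in lockstep.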
Data Engineer
Posted today
Job Description
Design and implement methods for storing and retrieving data and for monitoring data pipelines: ingest raw data from source systems, then transform, clean, store, and enrich it so that both structured and unstructured data are ready for use, working with data lakes and cloud-based big data technology.
Develop data pipeline automation using Azure technologies, Databricks and Data Factory
Understand data, reports and dashboards requirements, develop data visualization using Power BI, Tableau by working across workstreams to support data requirements including reports and dashboards and collaborate with data scientists, data analyst, data governance team or business stakeholders on several projects
Analyze and perform data profiling to understand data patterns following Data Quality and Data Management processes
Qualifications
3+ years' experience in big data technology, data engineering, or data analytics application development.
Experience with unstructured data for business intelligence is an advantage.
Technical skills in SQL, UNIX and shell scripting, Python, R, Spark, and Hadoop.
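The data profiling duty described above can be illustrated as a per-column pass that computes null counts, distinct counts, and numeric ranges. Sample data and column names here are hypothetical:

```python
# Simple data profiling: for each column, count nulls and distinct values
# and record the numeric min/max where the column is numeric.
import csv
import io

SAMPLE = """region,revenue
north,100
south,
north,250
east,80
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
profile = {}
for col in rows[0]:
    values = [r[col] for r in rows]
    non_null = [v for v in values if v != ""]
    numeric = [float(v) for v in non_null if v.replace(".", "", 1).isdigit()]
    profile[col] = {
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "min": min(numeric) if numeric else None,
        "max": max(numeric) if numeric else None,
    }
print(profile)
```

The same pass scaled up (in Spark or SQL rather than pure Python) is typically the first step of a Data Quality assessment, since nulls and out-of-range values surface immediately.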
Data Engineer
Posted today
Job Description
- Design, develop, and maintain robust and scalable data pipelines using tools such as Apache Airflow, PySpark, and cloud-native services (e.g., Azure Data Factory, Microsoft Fabric Pipelines).
- Manage data ingestion from APIs, files, and databases into data lakes or data warehouses (e.g., Microsoft Fabric Lakehouse, Iceberg, DWS).
- Ensure seamless data integration across on-premise, cloud, and hybrid environments.
- Implement data validation, standardization, and transformation to ensure high data quality.
- Apply data encryption, masking, and compliance controls to maintain security and privacy standards.
AI & Intelligent Automation
- Collaborate with Data Scientists to deploy ML models and integrate predictive insights into production pipelines (e.g., using Azure Machine Learning or Fabric Notebooks).
- Support AI-powered automation and data insight generation through tools like Microsoft Copilot Studio or LLM-powered interfaces (chat-to-data).
- Assist in building lightweight AI chatbots or agents that leverage existing datasets to enhance business efficiency.
Qualifications:
- 3–5+ years of experience in Data Engineering or AI Engineering roles.
- Proficiency in Python, SQL, and big data frameworks (Apache Airflow, Spark, PySpark).
- Experience with cloud platforms: Azure, Huawei Cloud, or AWS.
- Familiar with Microsoft Fabric services: OneLake, Lakehouse, Notebooks, Pipelines, and Real-Time Analytics.
- Hands-on with Microsoft Copilot Studio to design chatbots, agents, or LLM-based solutions.
- Experience in ML model deployment using Azure ML, ModelArts, or similar platforms.
- Understanding of vector databases (e.g., Qdrant), LLM orchestration (e.g., LangChain), and prompt engineering is a plus.
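A production chat-to-data feature would use an LLM (for example via Copilot Studio or LangChain) to generate queries from natural language. Purely to illustrate the question-to-query idea, this toy stand-in maps keyword intents to fixed, pre-written SQL templates over a sample table; every name, intent, and question is hypothetical:

```python
# Toy "chat-to-data" sketch: match a keyword intent in the user's question
# and run the corresponding fixed SQL template. Not an LLM; illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 100), ("south", 40), ("north", 60)])

TEMPLATES = {
    "total": "SELECT SUM(amount) FROM sales",
    "by region": "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region",
}

def answer(question):
    for intent, sql in TEMPLATES.items():
        if intent in question.lower():
            return conn.execute(sql).fetchall()
    return "Sorry, I can't answer that yet."

print(answer("What is the total revenue?"))  # → [(200.0,)]
print(answer("Show revenue by region"))      # → [('north', 160.0), ('south', 40.0)]
```

An LLM replaces the keyword lookup with generated SQL, which is why guardrails such as schema grounding and query validation matter in the real feature.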
Data Engineer
Posted today
Job Description
Location: Onsite Ratchayothin, Sathorn
Contract Duration: 6 months
Experience Required: 2–3 years
Job Description:
We are looking for a Data Engineer to join our team under a 6-month contract.
The candidate will be responsible for data development and migration tasks, working closely with the data analytics and engineering teams to ensure smooth data integration and transformation.
Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes.
- Perform data migration, transformation, and integration between systems.
- Work with structured data in SQL databases such as Microsoft SQL Server or DB2.
- Write efficient Python and SQL scripts for data extraction, transformation, and automation.
- Collaborate with cross-functional teams to ensure data quality and accuracy.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- 2–3 years of experience as a Data Engineer or in a similar role.
- Strong hands-on experience with Python and SQL.
- Proficiency in Microsoft SQL Server (MSSQL) is required.
- Experience with DB2 is a plus.
- Familiarity with ETL tools or frameworks and data migration projects is preferred.
- Strong analytical and problem-solving skills.
- Ability to work both independently and collaboratively in a fast-paced environment.
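The migration and validation duties above can be sketched as a copy step followed by a reconciliation check. SQLite stands in for both source and target systems, and all table and column names are illustrative:

```python
# Table migration with a reconciliation check: copy rows from a source
# database to a target, then verify the row counts agree before sign-off.
import sqlite3

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

src.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
src.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 10.0), (2, 0.5)])

dst.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
dst.executemany("INSERT INTO accounts VALUES (?, ?)",
                src.execute("SELECT id, balance FROM accounts"))
dst.commit()

src_count = src.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
dst_count = dst.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
assert src_count == dst_count, "row-count mismatch after migration"
print(f"migrated {dst_count} rows")  # → migrated 2 rows
```

Real migrations typically add checksum or sampled-value comparisons on top of row counts, since counts alone cannot catch corrupted values.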
Data Engineer
Posted today
Job Description
Job Description:
• Develop, maintain, and optimize data pipelines and workflows.
• Extract, transform, and load (ETL) data from various sources into databases or data warehouses.
• Support business teams by preparing datasets for analysis and reporting.
• Collaborate with analysts and stakeholders to understand business rules and implement logic accordingly.
• Perform data validation and ensure data quality and consistency.
• Provide technical support for data-related issues.
• Automate routine data processing tasks to improve efficiency.
Qualifications:
• SQL & Database: Basic to intermediate SQL knowledge; able to perform joins, filters, and aggregations.
• Linux: Familiar with basic Linux commands and shell operations.
• Programming: Basic programming experience (Python preferred, or other scripting languages).
• Excel: Capable of using Excel for data analysis and reporting.
• Analytical Thinking: Fast learner with strong problem-solving and logical thinking skills.
• Business Understanding: Ability to understand customer business rules and translate them into data logic.
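The SQL skills listed (joins, filters, aggregations) fit in one small example against an in-memory SQLite database; tables and data are hypothetical:

```python
# One query exercising a join, a filter, and an aggregation together.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER, name TEXT);
CREATE TABLE orders (customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Ann'), (2, 'Ben');
INSERT INTO orders VALUES (1, 50), (1, 30), (2, 5);
""")

rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total        -- aggregation
    FROM customers c
    JOIN orders o ON o.customer_id = c.id        -- join
    WHERE o.amount > 10                          -- filter
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # → [('Ann', 80.0)]
```

Ben drops out entirely because his only order fails the filter before the aggregation runs, which is the kind of ordering subtlety these roles test for.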