Data Engineer – N99 Program

This job has expired; the application deadline has passed.
Full Time
  • Post Date: November 10, 2021
Job Description

• The customer is a leading information and communications technology (ICT) and communications engineering services provider across the Asia-Pacific region, headquartered in Singapore and a wholly owned subsidiary of the customer’s Group. It has in-depth domain knowledge and unique capabilities that create business value for customers, offering a broad range of services including consulting, systems development and integration, business process outsourcing, infrastructure management and solutions, and technology solutions.

What is the opportunity?
• The customer is looking for a data engineer to operationalize the data integration and management process – to ingest data from numerous data sources and apply transformations for data quality and insights. The data engineer will work closely with business users, project managers, and technical teams such as database engineers and source-system data owners to develop data pipelines that automate data acquisition and cleansing, sustaining analytics and AI initiatives.
• The role requires the ability to translate business and technical requirements into data interfaces, data transformation jobs, and data models that power self-service analytics or AI projects. To support this, you may also need to establish data management processes such as data governance, data cataloguing, and security/privacy classification, and advise our clients on the collection, storage and consumption of information in their organizations.

What will you do?
• Design and implement relevant data models in the form of data marts stored in Operational Data Stores, Data Warehouses or Big Data platforms
• Build data pipelines to bring information from source systems, harmonise and cleanse data to support analytics initiatives for core business metrics and performance trends.
• Perform data profiling to understand data quality and advise practical measures to address such data issues through data transformation and data loading
• Dive into company data to identify sources and features that will drive business objectives.
• Work closely with project manager and technical leads to provide regular status reporting and support them to refine issues/problem statements and propose/evaluate relevant analytics solutions
• Bring your experience and ideas to effective and innovative engineering, design and strategy
• Work in interdisciplinary teams that combine technical, business and data science competencies, delivering work using waterfall or agile software development lifecycle methodologies

The range of accountability, responsibility and autonomy will depend on your experience and seniority, including:
• Contributing to our internal networks and special interest groups
• Mentoring to upskill peers and juniors

What do you need to succeed?
• Possess good communication skills to understand our customers’ core business objectives and build end-to-end data-centric solutions to address them
• Good critical thinking and problem-solving abilities

Must have:
• Prior experience building large scale enterprise data pipelines using commercial and/or open source data management tools from vendors such as Informatica, Talend, Microsoft, IBM or Oracle
• Strong knowledge of data manipulation languages such as SQL necessary to build and maintain complex queries and data pipelines
• Practical appreciation of data quality metrics and remediation strategies
• Data modelling and architecting skills, including a strong foundation in data warehousing concepts, data normalization, and dimensional data modelling for OLAP
• Undergraduate or graduate degree in computer science or an equivalent field

Nice to have:
• Experience with other aspects of data management such as data governance, metadata management, archival, data lifecycle management
• Processing of semi-structured and unstructured data sets with NoSQL, graph, and Hadoop-based data storage technologies such as MongoDB, Cassandra, HBase, Hortonworks/Cloudera, Elasticsearch and Neo4j, using Spark, Splunk or Apache NiFi for batch or streaming data
• Large-scale data loading experience moving enterprise or operational data from source systems to new applications or data analytics solutions

Experience leveraging cloud-based data analytics platforms such as:
• AWS serverless architectures using Lambda, DynamoDB, EMR and Redshift
• Azure Data Factory or SQL Data Warehouse
• GCP BigQuery/BigTable, Cloud Dataprep/Dataflow/Dataproc

What do we offer?
• Competitive salary with performance-based bonus
• Young and dynamic working environment
• Continuous development of hard and soft skills through work and professional training
• Opportunity to work with the newest technology trends
• Exciting leisure activities: sports and arts events (football club, family day…)
• Labor policy fully compliant with Vietnamese labor legislation, plus other company benefits (company trip, holidays, etc.)

Interested candidates should click the Apply button to submit a complete Curriculum Vitae/Resume and Cover Letter to: Recruitment Department – FPT Software Workforce Development
• Contact Person: Mrs. Nguyen Duc Viet Thanh
• Email: [email protected]