Detailed Job Description
The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. They will be responsible for expanding and optimizing our data and data pipeline architecture, and for optimizing data flow and collection for cross-functional teams. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that an optimal data delivery architecture is consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing, or even re-designing, our company's data architecture to support our next generation of products and data initiatives. Candidates must be experienced in the Python programming language and knowledgeable in some of the AWS services for big data processing listed below.

TECHNOLOGY STACK:
AWS
EC2
Lambda
Redshift
RDS
SQS
CloudWatch
Athena
Batch (Optional)
Glue
Step Functions (Optional)
S3

PROGRAMMING LANGUAGE
Python
Bash/Shell scripting
SQL (PostgreSQL)
Basic Knowledge of Machine Learning Development (Optional)

RESPONSIBILITIES:
Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
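As an illustration of the pipeline-building work described above, the sketch below shows a minimal extract-transform-load flow in plain Python. This is purely illustrative and not part of the role description: the stage names and sample records are hypothetical, and in practice the extract and load steps would target AWS services such as S3, RDS, or Redshift.

```python
# Minimal ETL sketch: extract raw records, normalize them, and load
# the result into a destination store (here, an in-memory list).
# Stage names and sample data are hypothetical, for illustration only.

def extract():
    # In a real pipeline this might read from S3 or an RDS table.
    return [
        {"user": " alice ", "spend": "120.50"},
        {"user": "bob", "spend": "80.00"},
    ]

def transform(records):
    # Normalize names and cast spend to a numeric type.
    return [
        {"user": r["user"].strip().title(), "spend": float(r["spend"])}
        for r in records
    ]

def load(rows, destination):
    # In a real pipeline this might COPY into Redshift.
    destination.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse)
```

Real pipelines would add error handling, retries, and orchestration (for example, via AWS Step Functions), but the extract/transform/load decomposition is the same.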

QUALIFICATIONS:
Advanced working SQL knowledge, including query authoring, and experience with relational databases, as well as working familiarity with a variety of other databases.
Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing, and extracting value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
5+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
Candidates should also have experience using the following software and tools:

Experience with big data tools: Hadoop, Spark, etc.
Experience with relational SQL and NoSQL databases (optional), including Postgres and Cassandra (optional).
Experience with AWS cloud services: EC2, RDS, Redshift, Lambda, Athena, Cloudwatch.
Experience with stream-processing systems: Storm, Spark-Streaming, etc.
Experience with object-oriented and scripting languages: Python, Bash, etc.
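The SQL query-authoring qualification above can be illustrated with a short aggregation query. SQLite stands in for Postgres here purely so the snippet is self-contained; the table name, columns, and data are invented for the example.

```python
import sqlite3

# Query-authoring sketch: total spend per customer, highest first.
# SQLite stands in for Postgres; the schema and rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("acme", 100.0), ("acme", 50.0), ("globex", 75.0)],
)

rows = conn.execute(
    "SELECT customer, SUM(amount) AS total "
    "FROM orders GROUP BY customer ORDER BY total DESC"
).fetchall()
print(rows)
conn.close()
```

The same GROUP BY / aggregate pattern carries over directly to Postgres or Redshift, where it would typically run over far larger tables.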



Keywords
Data Engineer (Python)


Interview Information
Job Location : Philippines
Interview Location : Philippines
Contact Person Name : Angelika Baniqued
Contact Number : 9171503036

Company Profile
Talentium is a Philippine-based I.T. service and consulting company that started out as an I.T. staff provider with only 3 employees in its Ortigas office. It grew quickly over the years and now has two additional business units, namely Apps Development and Infrastructure Solutions, with over 60 employees. With recent funding from a reliable source and an increase in clients, Talentium has become more aggressive and is set to make waves in the I.T. world.

The Talentium of the 21st century was founded in 1997 as a dynamic I.T. consulting company called Decimal Solutions, known for providing its clients with comprehensive yet practical solutions. In 2008, the company took on a new name to reflect its core expertise: applications development.

The Talentium portfolio of services and capabilities includes:
Custom applications development in Ruby on Rails and Java
Staffing of highly technical resources
Implementing Oracle E-Business Suite
Providing managed I.T. services to small and medium enterprises
Web and mobile apps development

How we make it work: Strategic and Simple Solutions
Strategic: We make sure that the choice of technology is consistent with a client's business strategy and goals. We are able to provide quick wins, although we are in for the long haul.
Simple: We strive for simplicity in our architecture, designs, and interfaces. We hide the complexity of technology, so the users can do what they do best.
Solutions: We make sure that what we build actually solves your concerns. We have inquiring minds and a passion to find the answers to your questions.