Applications have closed

### Responsibilities

- Create and maintain optimal data pipeline architecture.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP ‘big data’ technologies.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.

### Requirements

- Excellent working SQL knowledge and experience working with relational databases.
- Experience with relational SQL and NoSQL databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
- Experience with GCP cloud services: GCS, BQ, Google Cloud Dataproc.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.

Please mention the word BECKONS when applying to show you read the job post completely (#RMi41Ny4yOS4xNDY=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they’re human.

#Salary and compensation
$80,000 – $120,000/year

#Location
Worldwide

Tagged as: Java, Python, SQL

FullyRemoteJobs.IO
