Senior Data Engineer
What You'll Do

As a Senior Data Engineer on the data platform team, we'll rely on your expertise across multiple disciplines to develop, deploy, and support data systems, data pipelines, data lakes, and lakehouses. Your ability to automate, performance-tune, and scale the data platform will be key to your success.

Your initial areas of focus will include:

- Collaborate with stakeholders to make effective use of core data assets
- Load both streaming and batch data with Spark and PySpark (see the first sketch following this posting)
- Engineer lakehouse models to support defined data patterns and use cases
- Leverage a combination of tools, engines, libraries, and code to build scalable data pipelines
- Work within an IT-managed AWS account and VPC to stand up and maintain data platform development, staging, and production environments
- Document data pipelines, cloud infrastructure, and standard operating procedures
- Express data platform cloud infrastructure, services, and configuration as code
- Automate load, scaling, and performance testing of data platform pipelines and infrastructure
- Monitor, operate, and optimize data pipelines and distributed applications
- Help ensure appropriate data privacy and security
- Automate continuous upgrades and testing of data platform infrastructure and services
- Build data pipeline unit, integration, quality, and performance tests (see the test sketch following this posting)
- Participate in peer code reviews, code approvals, and pull requests
- Identify, recommend, and implement opportunities for improvement in efficiency, resilience, scale, security, and performance

What You Need to Get the Job Done (if you don't have it all, apply anyway!)

- Experience developing, scaling, and tuning data pipelines in Spark with PySpark
- Understanding of data lake, lakehouse, and data warehouse systems and related technologies
- Knowledge and understanding of data formats, data patterns, models, and methodologies
- Experience storing data objects in Hadoop or Hadoop-like environments such as S3
- Demonstrated ability to deploy, configure, secure, performance-tune, and scale EMR and Spark
- Experience working with streaming technologies such as Kafka and Kinesis
- Experience with the administration, configuration, performance tuning, and security of database engines like Snowflake, Databricks, Redshift, Vertica, or Greenplum
- Ability to work with cloud infrastructure, including resource scaling, S3, RDS, IAM, security groups, AMIs, CloudWatch, CloudTrail, and Secrets Manager
- Understanding of security around cloud infrastructure and data systems
- Git-based team coding workflows

Bonus Skills (Not Required, So Apply Anyway!)

- Experience deploying and implementing lakehouse technologies such as Hudi, Iceberg, and Delta
- Experience with Flink, Presto, Dremio, Databricks, or Kubernetes
- Experience expressing infrastructure as code using tools like Terraform
- Experience with and an understanding of a zero-trust security framework
- Experience developing CI/CD pipelines for automated testing and code deployment
- Experience with QA and test automation
- Exposure to visualization tools like Tableau

Beyond the technical skills, we're looking for individuals who are:

- Clear communicators with team members and stakeholders
- Analytical and perceptive of patterns
- Creative in coding
- Detail-oriented and persistent
- Productive in a dynamic setting

If you love to learn, you'll be in good company. You'll likely have a Bachelor's degree in computer science, information systems, or equivalent working experience.
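A minimal sketch of the batch-and-streaming loading pattern referenced in the responsibilities above. The bucket paths, Kafka broker, topic name, and event schema are hypothetical placeholders, not details from the posting, and the streaming half assumes the Spark Kafka connector package is available on the classpath.

```python
# Sketch: loading batch and streaming data with PySpark.
# All paths, brokers, topics, and schema fields below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = (
    SparkSession.builder
    .appName("data-platform-ingest")
    .getOrCreate()
)

# --- Batch load: read Parquet objects from an S3 data lake prefix ---
batch_df = spark.read.parquet("s3a://example-bucket/raw/events/")  # hypothetical path
batch_df.write.mode("append").parquet("s3a://example-bucket/curated/events/")

# --- Streaming load: consume JSON events from a Kafka topic ---
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

stream_df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    # Kafka values arrive as bytes; cast to string and parse the JSON payload
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    stream_df.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/curated/events_stream/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events_stream/")
    .outputMode("append")
    .start()
)
query.awaitTermination()  # blocks; in production this runs as a long-lived job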
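```

The posting also calls for building data pipeline unit and quality tests. Below is one hedged example of what such a test might look like with pytest and a local SparkSession; the `dedupe_events` transform and its expectations are invented for illustration and are not part of the role description.

```python
# Hypothetical unit test for a pipeline transform, using pytest and local Spark.
import pytest
from pyspark.sql import SparkSession


def dedupe_events(df):
    """Example transform under test: keep one row per event_id."""
    return df.dropDuplicates(["event_id"])


@pytest.fixture(scope="module")
def spark():
    # Small local session so tests run without a cluster
    session = (
        SparkSession.builder
        .master("local[2]")
        .appName("pipeline-tests")
        .getOrCreate()
    )
    yield session
    session.stop()


def test_dedupe_events_removes_duplicates(spark):
    rows = [("e1", "a"), ("e1", "a"), ("e2", "b")]
    df = spark.createDataFrame(rows, ["event_id", "payload"])
    result = dedupe_events(df)
    assert result.count() == 2  # duplicate event_id collapses to a single row
```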
#Salary and compensation
No salary data was published by the company, so the job board estimated a range of $62,500–$120,000/year based on similar roles.

#Location
Charleston, South Carolina, United States