
Software Engineer-1715

Remote, USA | Full-time | Posted 2025-11-24
About the Position

FreeWheel, a Comcast company, provides comprehensive ad platforms for publishers, advertisers, and media buyers. Powered by premium video content, robust data, and advanced technology, we’re making it easier for buyers and sellers to transact across all screens, data types, and sales channels. As a global company, we have offices in nine countries and can insert advertisements around the world.

Job Description

Responsibilities

• Contribute to a team responsible for designing, developing, testing, and launching critical systems within the data foundation team
• Perform data transformations and aggregations using Scala within the Spark framework, including the Spark APIs, Spark SQL, and Spark Streaming (an illustrative sketch of this kind of job follows the Requirements section below)
• Use Java within the Hadoop ecosystem, including HDFS, HBase, and YARN, to store and access data and to automate tasks
• Process data using Python and shell scripts
• Optimize performance using the Java Virtual Machine (JVM)
• Architect and integrate data using Delta Lake and Apache Iceberg
• Automate the deployment, scaling, and management of containerized applications using Kubernetes
• Develop software infrastructure using AWS services, including EC2, Lambda, S3, and Route 53
• Monitor applications and platforms using Datadog and Grafana
• Store and query relational data using MySQL and Presto
• Support applications under development and customize current applications
• Assist with the software update process for existing applications and with roll-outs of software releases
• Analyze, test, and assist with the integration of new applications
• Document all development activity
• Research, write, and edit documentation and technical requirements, including software designs, evaluation plans, test results, technical manuals, and formal recommendations and reports
• Monitor and evaluate competitive applications and products
• Review literature, patents, and current practices relevant to the solution of assigned projects
• Collaborate with project stakeholders to identify product and technical requirements
• Conduct analysis to determine integration needs
• Perform unit, functional, integration, and performance tests to ensure the functionality meets requirements
• Build CI/CD pipelines to automate the quality assurance process and minimize manual errors

This position is eligible to work remotely one or more days per week, per company policy.

Requirements

Bachelor’s degree, or foreign equivalent, in Computer Science, Engineering, or a related technical field, plus two (2) years of experience:

• Performing data transformations and aggregations using Scala within the Spark framework, including the Spark APIs, Spark SQL, and Spark Streaming
• Using Java within the Hadoop ecosystem, including HDFS, HBase, and YARN, to store and access data and to automate tasks
• Processing data using Python and shell scripts
• Developing software infrastructure using AWS services, including EC2, Lambda, S3, and Route 53
• Monitoring applications and platforms using Datadog and Grafana
• Storing and querying relational data using MySQL and Presto

Of those two years, one (1) year must include:

• Optimizing performance using the Java Virtual Machine (JVM)
• Architecting and integrating data using Delta Lake and Apache Iceberg
• Automating the deployment, scaling, and management of containerized applications using Kubernetes

Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications.

Apply to this job
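As referenced in the Responsibilities list, here is a minimal, hypothetical sketch of the kind of Spark transformation-and-aggregation job this role describes. Every specific in it (the S3 paths and the publisherId, eventTime, and revenue fields) is an illustrative assumption, not FreeWheel’s actual schema or pipeline.

```scala
// Hypothetical sketch of a daily per-publisher rollup in Spark (Scala).
// The S3 paths and column names are illustrative assumptions, not
// FreeWheel's actual schema or pipeline.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyImpressionRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-impression-rollup")
      .getOrCreate()

    // Read raw impression events (assumed to be Parquet on S3).
    val impressions = spark.read.parquet("s3://example-bucket/ad-impressions/")

    // Aggregate impression counts and revenue per publisher per day.
    val daily = impressions
      .groupBy(col("publisherId"), to_date(col("eventTime")).as("day"))
      .agg(
        count(lit(1)).as("impressions"),
        sum(col("revenue")).as("totalRevenue")
      )

    // Write the rollup back out, partitioned by day.
    daily.write
      .mode("overwrite")
      .partitionBy("day")
      .parquet("s3://example-bucket/daily-impression-rollups/")

    spark.stop()
  }
}
```

A production version of this job would more likely read from and write to the Delta Lake or Apache Iceberg tables the posting mentions, but the groupBy/agg shape of the work would be the same.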
