Software Data Engineer (100% Remote)
Simply Analytics is a powerful spatial analytics and data visualization application used by thousands of business, marketing, and social science researchers in the United States and Canada. It comes pre-packaged with 200,000+ data variables and allows our users to create maps, charts, tabular reports, and crosstabs. We are passionate about creating outstanding software, and we believe in test-driven development, continuous integration, and code review.
As a small company, each of our developers has an important role to play – at Simply Analytics, you are not just another cog in the wheel, you are an integral member of our team. You will be working on valuable features and making key decisions that impact the direction of the application and the satisfaction of our users.
We’re looking for a Software Data Engineer to manage our existing data workflows, develop and maintain new ETL pipelines, and conduct data-related QA. You will be creating and maintaining production-quality in-house tools within a large shared codebase, and the data you curate will be used by thousands of university students, researchers, and marketing professionals.
The ideal candidate is a self-starter, has a high level of attention to detail, is comfortable asking questions, enjoys working with talented colleagues, and has an interest in analytics and data visualization.
This is a 100% remote, full-time salaried position; our developers can live and work anywhere in Canada.
Responsibilities:
- Design, develop, and test features
- Write high-quality, clean, scalable, maintainable code (e.g., PEP 8, PEP 484)
- Contribute ideas for new features or improvements to existing features
- Assist colleagues through code review, collaboration, and troubleshooting
Requirements:
- 5+ years of professional software development work experience
- 3+ years of experience working with large Python codebases
- Comfortable using the Linux command line
- Data warehousing experience with PostgreSQL
- Advanced relational database and data manipulation skills
- Experience with data orchestration platforms (Dagster, Airflow, or Prefect)
- Ability to maintain our full data-processing stack, which is written primarily in Python
- Experience using AWS services
- Experience with development on large OOP software projects
- Experience with big data engines and platforms (e.g., Apache Spark, Hadoop, Trino)