June 10th, 2021 

10AM – 4:30PM BST 

Ai+ Professionals Expo LIVE EVENT!


Enhance your Career at Ai+

Positions Available

Our Hiring Partners have a wide variety of roles they are looking to fill. Here are a few sample positions:

Global Blue - BI/Data QA Engineer - Vienna, Austria

Show Interest Here


Working in an Agile environment, you will be part of the DATA team and be responsible for performing quality assurance tasks with a focus on database and Business Intelligence (BI) tool testing. You will have the opportunity to work with cutting-edge technologies and help us drive Global Blue’s Business Intelligence forward to the next level.

Within our team, you will be able to grow your competence and responsibility over time in the direction of a data analyst, becoming an important counterpart for stakeholders and product owners in the analysis and design phases of delivery. You will be in charge of supporting analysis and requirements engineering, data profiling, and acceptance criteria definition in the early stages of delivery.


  • Testing of the functionality and output of MS SQL Server SSIS packages and stored procedures.
  • Close cooperation with the Engineering, Product team and Stakeholders.
  • Being involved in the full SDLC including all testing aspects (test planning, test execution, reporting)
  • Creating, maintaining and executing manual/automated test cases based on defined requirements and acceptance criteria.
  • Creating and maintaining needed test data.
  • Taking an active role in improvement endeavours.
  • With time, grow in the role of Data Analyst


Desired Skills and Experience

  • University degree or relevant working experience
  • 2-5 years of experience in testing / quality assurance (QA)
  • Solid understanding of SQL, database definition and manipulation languages/methods.
  • Solid understanding of review and testing processes.
  • Ideally, experience with MS SQL Server, SSIS packages and stored procedures
  • Experience in BI reporting tools (ideally MicroStrategy and/or Tableau) or readiness to acquire these skills

Personal qualifications

  • Ability to adhere to work plans and track assignments with minimal guidance
  • Independent and self-motivated personality
  • Excellent team player
  • Excellent problem-solving skills
  • Excellent command of English in spoken and written form


  • Work in a fast-growing international company
  • State-of-the-art office environment
  • Development and potential to grow
  • Multi-cultural environment
  • Challenging job
  • Relocation support
  • Flexible working time
  • Training
  • Good location close to public transport
  • Referral bonus
  • Team building, party and events
  • Other Benefits: wellbeing, company doctor, meal vouchers, discounts etc.


Global Blue guarantees a competitive and performance-related salary depending on your professional and personal qualifications. As required by Austrian federal law, we hereby state that the guaranteed legal minimum annual compensation according to the Austrian collective agreement amounts to €36,530.12; however, the total compensation is subject to individual agreement.


Global Blue - Data Scientist - Vienna, Austria

Show Interest Here


We have an exciting opportunity for a Business Intelligence (BI) Data Scientist who will help the company use collected data to understand trends, solve business problems, and monitor metrics. You will assist with determining the data the company needs, structuring it in an appropriate format, and analysing it using queries. With a high level of autonomy, you will be responsible for developing new Business Intelligence reporting solutions by transforming and modelling large volumes of real-time data and visualising it for decision-makers.

Working closely with Product and Engineering team, you will be in charge of:

  • developing advanced analytical and machine learning solutions,
  • implementing advanced analytics and statistical / complex modelling to maximize proprietary outlook capability,
  • integrating relevant data and insights into dashboards to be directly and independently consumed by decision-makers,
  • identifying opportunities to improve efficiency of current data processes,
  • developing new insights and reporting solution.

The position will require travelling twice a month.



  • Perform in-depth analysis of large volumes of data to unveil business insights by applying smart statistical methods
  • Implement and run business-relevant prototypes of advanced data management processes (including machine learning and image recognition) applied to performance management and end-customer analytics
  • Leverage complex statistical models to support forecasting activities
  • Define advanced processes to integrate unstructured proprietary and third-party data into the data ecosystem to elevate the depth and breadth of business insights
  • Turn proofs of concept into pilots and collaborate with the technical team on productive implementation
  • Participate in the design and improvement of BI products and services (interactive & static reports, advanced analytics on customer behaviours, driver analysis, etc.)





Required qualifications

  • University degree in the field of Statistics, Applied Mathematics, Economics, or equivalent.
  • 3 years of experience in advanced Business Analytics, Business Intelligence or Data Science, including at least 2 years of experience in advanced data modelling.
  • Experience in advanced analytics including machine learning and Artificial Intelligence (AI) processes, data forecasting or large volume data analysis is a plus.
  • Proven track record in data validation and end-to-end testing
  • Strong with SQL, Python, R and other quantitative and statistical modelling technologies and tools.
  • Experience with BI and reporting tools: MicroStrategy or Tableau is a plus.
  • Fluent in English


Personal competencies

  • Excellent communication and presentation skills
  • Capability to connect the dots across multiple information sources to form hypotheses and structure action plans for new innovative products
  • Ability to work independently and within a multi-functional team.
  • Pro-active attitude and ability to act as full owner of business projects.
  • Willingness to travel


  • Work in a dynamic international environment
  • Inspiring colleagues
  • Flexible working time and distance/remote work in order to protect health of our employees and their families during the pandemic situation.
  • Other benefits: trainings, language courses, meal vouchers and company doctor

UnitedHealth Group - Senior Data Scientist - Remote, Dublin, Ireland

Show Interest Here

About the role:

This role is based within Optum Insight as part of Payer Decision Intelligence. Optum is the technology and services division of UnitedHealth Group, the leading international health insurer. The primary function of the Data Science team within Payer Decision Intelligence is to build machine learning solutions and put them into production.

We are looking for an additional Senior Data Scientist to enhance our capacity to develop the analytics in support of our business insights. As a key member of the Data Science Team, you will be part of UnitedHealth Group’s mission of helping people live healthier lives. You will work with other Data Scientists and subject matter experts to conduct and manage outcomes of various analytic studies. As part of this team, you will be empowered to analyze data, create solutions and build production-ready models. Join us! There’s never been a better time to do your life’s best work.

Primary Responsibilities:

  • Analyze, review and interpret complex data through the development of case-studies, enabling the data to tell a story
  • Build and maintain production-ready machine-learning models
  • Identify drivers of patterns/behaviors uncovered in data
  • Contribute to the design and realization of analytical based tools/assets to identify and monitor non-standard claim patterns
  • Engage with subject-matter experts to explore business problems and design solutions
  • Present analysis and interpretation for both operational and business review and planning
  • Support short- and long-term operational and strategic business activities through the use of data; develop recommended business solutions through research and analysis, and implement them when appropriate

You will be rewarded and recognized for your performance in an environment that will challenge you and give you clear direction on what it takes to succeed in your role, as well as providing development for other roles you may be interested in.

Required Qualifications:

  • Degree in Analytics, Statistics, Physics, Mathematics, Engineering, etc., with a significant quantitative aspect
  • Data Science experience with Python or R
  • Experience developing machine learning models
  • A history of working on analytical projects, for example: exploratory data analysis, feature engineering, and fitting, tuning, evaluating and comparing models
  • Ability to communicate effectively to clients and the business in writing and verbally


Preferred Qualifications:

  • Experience with relational databases
  • Experience with Spark/PySpark
  • Experience in at least one data visualization tool for displaying and exploring analytic results such as Plotly, R Shiny, Matplotlib or similar

Please note you must currently be eligible to work and remain indefinitely without any restrictions in the country to which you are making an application. Proof will be required to support your application.

Current Health - Data Engineer - Remote, United Kingdom

Show Interest Here


Sounds great, what experience do I need?

  • Have a degree in Computer Science, related field, equivalent training or work experience
  • Commercial experience in areas of distributed real-time stream processing and complex event processing tech
  • You will have experience working with large amounts of data
  • Have a deep knowledge of at least one modern programming language and a willingness to learn new ones as required
  • Have experience writing tests and testable code
  • Be comfortable reviewing, releasing, deploying and troubleshooting your and other people’s code
  • Have previous success in engineering at scale in a distributed systems environment
  • Have a practical understanding of cloud computing and networking – we use AWS with Nomad for micro-service management
  • Have experience collaborating with data scientists, product teams and other consumers of data assets


Bonus points for…

  • Familiarity with key big data technologies, such as Hadoop, MapReduce & Apache Spark.
  • A background involving Apache Kafka or other distributed data streaming platforms
  • Experience with API design/development


Technologies we use

  • Backend: Java (Spring), Python, .NET
  • Frontend: JavaScript (TypeScript), Angular, Ionic, npm
  • Databases: PostgreSQL (RDS), Couchbase and others
  • Infrastructure: Linux, RabbitMQ, AWS via Terraform, Chef, Nomad, Consul and Fabio
  • Data Science and ML: H2O, Jupyter, TensorFlow, Keras and Spark
  • Monitoring: DataDog and ELK


Spec your own environment

Salary Exchange Pension scheme (5% employee, 3% employer contribution)

Private Medical Insurance through Vitality

2 x Life Assurance cover

Critical Illness cover

Employee Assistance Program

£10 pcm flex pot to use toward benefits in our Benni benefits portal

On call allowance (Only payable if and for so long as you provide on call services)

Flexible, autonomous working environment

Bike to work scheme

Give as you earn through payroll

Monthly snack box

Mimi.io - Data Engineer - Remote/In Office, Berlin, Germany

Show Interest Here

Our mission at Mimi is to give everyone the best possible hearing experience. We are the world’s leading provider of hearing-based audio personalization and digital hearing test technologies.  Our team has developed a biologically-inspired and proprietary audio processing technology that knows how well you hear in order to personalize your listening experience. Mimi’s technology can be integrated into consumer electronics devices, such as headphones, smartphones, TVs, in-flight entertainment systems and a range of systems and platforms.

To help us bring amazing hearing technology to more people around the world, we are looking for a Data Engineer (f/m/d) for full-time employment in Berlin.

NOTE: With this role we are happy to support flexible working and accept candidates looking for a working week of 32-40 hours (80-100%)

As our Data Engineer at Mimi you will:

  • Take ownership of Data Engineering topics at Mimi and lead projects from scratch.
  • Work on designing and implementing a data strategy to improve our data management
    • Work with research and product teams on ETL/ELT pipelines between production and analytical database
    • Support design and implementation of data formats in various research and product contexts
    • Support maintenance of Mimi’s databases containing user profiles as well as internal scientific study data
    • Assist with designing and implementing of data queries and visualizations on OLAP for internal purposes
  • Interact with diverse teams within the company (research, product, marketing…) to take their data requirements into account
  • Evaluate different implementation strategies and technical trade-offs
  • Share your knowledge of Data Engineering practices and potentially mentor others
What you should bring:

  • Solid knowledge and experience with:
    • database design
    • data pipeline design
    • Python (NumPy, pandas)
    • SQL (PostgreSQL)
  • 2+ years of experience in setting up and maintaining pipelines for datasets of all sizes
  • Fluency in English with great communication skills

It would be great if you additionally have:

  • Education in data engineering, computer science, mathematics, or a related field
  • Experience working with personally identifiable data and medical-grade data
  • Some knowledge and experience in:
    • Scala, R
    • Cassandra, CouchDB, DynamoDB
    • Docker
    • Mongo DB
    • Analysis and visualization of scientific and/or health data
    • Working with data warehouses
    • Documentation tools like Confluence
  • Interest in audio, music, sound, or hearing
  • Foundational knowledge in sound generation and/or signal processing
  • Startup mentality with a proactive attitude
What we offer:

  • A top-caliber, diverse and international team of dedicated domain experts
  • A high degree of flexibility in organizing your work and full responsibility for your own projects
  • Working from home opportunities
  • The opportunity to make an impact on an untapped market and contribute to the development of a truly innovative product and company
  • An entrepreneurial experience in a young, fast-moving startup located in the heart of trendy Berlin-Friedrichshain
  • Yearly education budget and a company culture that nourishes personal growth
  • Modern working equipment plus plenty of audio perks and cool toys to try out
  • Weekly company meals, team-building events and an annual summer trip
  • Flexible holiday policy
  • Monthly Lieferando credit

Find out more about us on our website: mimi.io

We know that some people have a tendency not to apply to jobs unless they meet 100% of the requirements. However, if you think that this job is great for you then please, send in your application!

ASOS - Data Scientist - London, United Kingdom

Show Interest Here

ASOS Technology is going through an exciting period of transition and major investment. This includes a number of strategic programmes to deliver amazing technology and business solutions that support our ambitious global growth plans. At the heart of these plans is the rebuilding of our digital platforms and channels to provide the best shopping experience for our customers. Our plan is designed to enable us to put our mobile experience first, enable personalisation and support a data-driven organisation. We are also making significant investments in all our Buying, Merchandising, Finance and People systems, with the latest toolsets and applications to accelerate the next phase of our global growth. And we are improving our ways of working within Technology to enable autonomous platform development and improve our engineering and agile practices.

ASOS is one of the UK’s top fashion and beauty destinations, expanding globally at a rapid pace. Our values are to be authentic, brave and creative, and we live and breathe these in everything we do.

We believe fashion can make you look, feel and be your best and, with technology in our DNA, we deliver the latest trends to our digital-obsessed 20-something market. Our award winning Tech teams sit at the heart of our business. We deliver technical innovation and pioneer incredible solutions, which are crucial to our continued success. We’re extremely ambitious and thrive on the individuality of our amazing employees. Our values encompass everything needed for our tech people to be the thought leaders of tomorrow.

Data Scientist

We are looking for Data Scientists to join our team and play a key role in helping ASOS provide the best shopping experience to our millions of customers. The role offers broad exposure to ASOS, requiring close collaboration with retail, marketing and technology divisions. You will be part of a highly innovative AI platform working alongside engineers and fellow scientists to solve and productionise interesting and difficult problems and leveraging cutting edge technology. At ASOS, as an online only retailer, we have unique datasets – transactions and click streams for millions of customers and photos, videos, and text descriptions of hundreds of thousands of products.

The ideal candidate will have a strong technical background and experience solving tough problems with large datasets. You will be a highly intelligent self-starter, able to work independently with a strong attention to detail.

What You’ll Be Doing…

  • Working in a cross functional team, alongside machine learning scientists, engineers, product owners and non-technical stakeholders, creating new and improving internal and external facing data products
  • Driving measurable impact across the business through advanced analytics and statistical analysis
  • Working out where the most value is and helping set up frameworks for evaluating algorithmic improvements
  • Presenting data and insights in new and innovative ways using data visualisation tools and storytelling
  • Keeping up with relevant state-of-the-art research, taking part in reading groups alongside other scientists, with the opportunity to create novel prototypes for the business, and publish at top conferences

We’d love to meet someone with…

  • A degree in Computer Science, Physics, Mathematics or a similar quantitative subject
  • A solid understanding of statistics (hypothesis testing, regressions, random variables, inference)
  • Comfortable with presenting back to technical and non-technical stakeholders through effective data visualisation and building of reporting frameworks
  • Experience accessing and combining data from multiple sources and building data pipelines, including a good knowledge of SQL
  • Comfortable working in a Python data science tech stack (e.g. pandas, NumPy, scikit-learn, PySpark, PyMC3, Dash, Plotly)
  • The ability to work collaboratively and proactively in a fast-paced environment alongside both scientists, engineers and non-technical stakeholders
  • A ‘hacker’ mentality, comfortable using open-source technologies.

An added bonus if you have…

  • An advanced degree in Computer Science, Physics, Mathematics or a similar quantitative subject
  • Experience in using advanced statistical methods to solve problems. This can either be through academic projects and publications, or experience analysing and solving problems within industry
  • An understanding of the e-commerce problem space
  • A basic knowledge of software development lifecycles, engineering, and machine learning practices (Data pipelines, API workflows, CI/CD deployments, DataOps, MLOps)


Sensat - Data Analyst - London, United Kingdom

Show Interest Here

The role
We’re looking for a Data Analyst to join our engineering and research team. In this role you will grow our analytics and data pipelines, instrumenting our products and the wider business. You’ll work closely with different stakeholders to answer hard analytical questions that span from user behaviours, to drone operations, to the digital modelling of the physical world and our business performance.

You will be a champion of a data-driven culture and influence decision-making at different levels of the business. You are a good communicator: able to discuss queries from teams, knowing when to push back, and comfortable presenting insights to the rest of the company.

Driven and curious, you’ll proactively work on those queries, enabling the organisation to gain visibility, react to and track this information, and ensuring the data remains available and accessible.
What you’ll do:
    • Learn and leverage our existing systems to gain insights into our users and platform; we are looking for a curious specialist to help us expand our telemetry, dig into the data further and help steer decision-making.
    • Identify problems that need to be solved and inform product and engineering teams.
    • Design, develop, analyse and report your own metrics for the wider organisation, defining their quality and statistical relevance.
    • Analyse data, derive insights and hypothesise new features to improve user engagement and other product metrics.
    • Integrate data from other systems and solutions across the business, working with the teams that own those systems to make the transfer seamless.

What you’ll bring:

    • 2+ years of experience in a similar analyst or consulting role.
    • Hands on experience with databases and SQL and the ability to manipulate different data sets (structured, geo, real time).
    • Proven experience in integrating different data sources and systems through APIs.
    • Proven experience with Metabase, Kibana, Grafana or similar solutions.
    • Experience with Looker, Mixpanel or similar solutions
    • Ability to self-direct, plan work and suggest new solutions to the broader business.
    • Experience in a SaaS business is a plus.

What we bring:

    • We treat compensation as bespoke and collaborative and want to work with you to make sure you get a package that you are happy with
    • Competitive Salary 
    • 30 days holiday per year (not including bank holidays)
    • Flexible / Remote working
    • £1500 per year personal training budget
Sensat is proud to be an equal opportunity employer. We do not discriminate based on race, ethnicity, colour, ancestry, national origin, religion, sex, sexual orientation, gender identity, age, disability, veteran status, genetic information, marital status or any other legally protected status.

TomTom - Expert Software Engineer (Maps Background) - Ghent, Belgium

Show Interest Here

What you’ll do

  • Work on tooling related to automated map making, including not only implementation work but also data analysis, design and deployment
  • Devise innovative ideas for solving problems and translate these ideas into technical designs
  • Be a contributor to our system architecture, able to prototype proposed solutions whilst giving support to the architect
  • Be the go-to person to give advice, on software development and proof of concepts, in and outside the team. Place your stamp on the team through the mentoring of more junior Software Engineers
  • Design, implement and maintain state-of-the-art algorithms in Java
  • Work on the automation of data handling and data structures
  • You will dive into the pile of metadata we gathered on our processes and write tooling to gather insights
  • Use those insights to make recommendations to streamline the process
  • You feel comfortable mentoring less experienced team members and working together on topics

What you’ll need
  • University degree in computer science or equivalent
  • 8+ years of experience in complex software development
  • You have solid experience with object-oriented programming languages and impressive coding skills
  • You are willing to work with Java and upskill your Java background
    We are a problem-solving company and prefer using the right tool for the job at hand, without restricting ourselves to one language (e.g. machine learning applications are typically done in Python)
  • Experience with automated deployment and testing techniques and tools (e.g. Kubernetes)
  • Knowledge of Cloud computing (e.g. Azure) or Big Data (e.g. Spark)
  • Never-ending curiosity: technologies, paradigms and clean code
  • Good knowledge of design patterns and clean code principles
  • You enjoy working as part of a self-organizing team, keeping the focus on the team goals.
  • Excellent communication skills, giving constructive feedback

What’s nice to have
  • Agile thinking
  • Pro-active, pragmatic attitude: building solutions over writing papers
  • Interest in or experience with data science / machine learning
  • Being familiar with mapping/GIS principles is a plus

Mirriad - Backend Software Engineer - Remote

Show Interest Here

About You:

You’re a back-end developer passionate about building scalable, maintainable and performant services. You write Java or Kotlin, working with frameworks (probably Spring Boot), to design and build those services. You use TDD to write clean, readable code but don’t shy away from making pragmatic choices to get the job done.

You partner with your colleagues on the front end to define and implement clean and reusable RESTful APIs. You work with your colleagues on the back end to value and drive consistency and reuse, but never at the expense of effective and timely delivery.

You want to grow beyond the job you’re doing now. Maybe you want to share your knowledge and experience by mentoring more junior colleagues, maybe you hanker for more design work, or maybe you want to work with up-to-date containerisation and orchestration technologies (Docker, Kubernetes). Maybe you want to ditch all that legacy code and build something new. Maybe you just want to work with smart people doing cool things like Machine Learning and Computer Vision.

Maybe you want to work at Mirriad.
Candidate Requirements:

  • Design and development of microservices implementing RESTful APIs in Java and/or Kotlin (ideally using Spring Boot)
  • Development using automated tests
  • Confident in continuous refactoring and evolutionary design
  • Experienced using version control systems (we use Bitbucket)
  • Experience working in an Agile environment
  • Working knowledge of Linux, AWS and Docker
  • A natural when working with CI/CD pipelines
  • Ability to debug and troubleshoot within distributed systems
  • BSc in Software Engineering/CS or relevant Technical Degree

Bonus Points:

  • Work experience in the digital advertising tech space (Ad Servers/DSP/SSP) or for a broadcaster/content owner with multi-platform distribution

HUUB - Data Engineer - Porto, Portugal

Show Interest Here

We’re not a logistics company with technology working for the fashion market. We’re a Tech Fashion Startup shaping the Supply Chain of the future. Our four founders and an early-adopter brand started this path in 2015 and we have rapidly grown into a global business, delivering the dream beauty of fashion in over 1M pieces to 123 countries worldwide. To keep on growing we need the right people to engage with the apparel and to build and develop great software. Spoke is our tech product that brings full visibility, flexibility and control to the fashion universe like never before enabling hundreds of brands to globally scale their business.


What’s the job HUUBout?

In this position you’re able to work on the core of our business, entering and developing a tech ecosystem with an architecture built to aggregate an end-to-end and omnichannel solution that, at the same time, is abstract enough to integrate with other global parties, granting the foundations of all business fundamentals.

You will be integrated into the Data Engineering team, being responsible for helping maintain and improve the data architecture and tools. 

What you’ll do:

  • Design and build scalable & reliable data pipelines (ETLs) for our data platform
  • Constantly evolve data models & schema design of our Data Warehouse to support standard and self-service business needs
  • Work cross-functionally with various teams, creating solutions that deal with large volumes of data
  • Work with the team to set and maintain standards and development practices

Who you are:

  • You have 2+ years of experience in:
  1. Building and maintaining data pipelines in a custom or commercial ETL tool (Talend, Pentaho Kettle, SSIS, etc.)
  2. Relational SQL (T-SQL, PostgreSQL, MySQL, etc.) and NoSQL (MongoDB, CouchDB) databases
  3. A Data Warehouse environment with varied forms of source data structures (RDS, NoSQL, REST APIs, etc.)
  • Good experience in creating and evolving dimensional data models & schema designs to improve the accessibility of data and provide intuitive analytics
  • Skilled in Python
  • Experience in working with a BI reporting tool (PowerBI, Tableau, etc)
  • Fluent in English, both written and spoken
  • You have good analytical and problem-solving skills, the ability to work in a fast-moving operational environment, and an enthusiastic, positive attitude
  • Experience/Certification in Google Cloud Platform (BigQuery, Dataflow, etc.) is a big plus
  • Being familiar with Data Streaming (Kafka) is a plus
  • You are familiar with continuous delivery principles such as version control with git; unit and/or automated tests are a plus

Welcome to our world:

  • Fast-growing global company
  • Young, ambitious and innovative environment empowering personal and professional development
  • Project-basis mindset with a multidisciplinary approach towards success
  • Agreements on flexible scheduling
  • Autonomy and responsibility for your work
  • Strong team culture and spirit
  • Multiple opportunities with great career prospects
  • We are committed to equality of opportunity regardless of age, gender, sexual orientation, race, religion, belief or any other personal, social or cultural background.

173tech - Data Engineer - Remote, London, United Kingdom

Show Interest Here

We are a modern analytics agency that helps companies of all stages unlock growth potential with data. We specialise in helping clients establish their analytics function or transition to modern analytics technologies, whilst guiding them on how to extract maximum value from their data. Our core team previously built and scaled analytics at Badoo and Bumble, a dating-app startup unicorn.

Our three key DNAs are Speed, Agility and Best Practices. We achieve this by working closely with founders and their core teams, building and continuously improving on SAYN (our own open source framework), and having loads of fun doing it!

We are looking for a highly energetic data engineer that is, like the rest of us, a perfect blend of technical capabilities and business focus. You will work on an exciting range of projects, try out a variety of tools and have a direct impact on our clients’ products and customers. You will learn and grow with the team and our clients.

What You Will Do

This is a great opportunity for anyone keen to work with many of the latest analytics technologies, contribute to our open source data processing framework and work on various types of projects. You will be working with some of the most inspirational and energetic startup teams. Please see below for more details on the tasks you will be working on:

  • Advise on optimal infrastructure setup based on the client’s needs over a wide range of open source and cloud-based solutions.
  • Set up and deploy analytics infrastructures using cutting-edge solutions including SAYN (our open source data processing framework).
  • Build custom data extractors from various 3rd party sources using Python.
  • Design and implement in-warehouse data models using SQL.
  • Actively contribute to the development of our open source data processing framework SAYN (this will be between 30 and 50% of the role).
  • Contribute to our library of data engineering best practices and documentation.

Who You Are

We are looking for a data engineer with a strong technical understanding. You are interested in the latest analytics technologies and always eager to discover new solutions that can improve infrastructures. Below is the list of the skills we are looking for. Do not worry if you do not cover all of those yet; the rest of the team will mentor you and help you get there!

Technical Skills:

  • Excellent SQL skills. You have a deep knowledge of the language and are fully proficient with it.
  • Excellent Python knowledge and strong understanding of object oriented programming. If you already contribute to open source projects, this is a plus!
  • Good command line skills.
  • Knowledge of git (or any other version control system).
  • Knowledge of the main analytics databases (Snowflake, Redshift, BigQuery, etc.) is a plus.
  • Knowledge of the various open source and cloud solutions available in the analytics ecosystem (AWS, GCP, Stitch, Fivetran, Looker, DBT, Airflow, etc.) is a plus.
  • Knowledge of event tracking technologies (e.g. Snowplow, Segment, etc.) is a plus.

Soft Skills:

  • Ability to work well in collaborative environments and deliver in a timely fashion.
  • Effective communication skills, both written and oral. Ability to communicate complex and technical topics in a simple manner.

Experience & Education:

  • A degree / master’s level education, ideally within a technical discipline, e.g. Computer Science, Data Science, etc.
  • You may be a recent graduate, have up to 2 years in a data engineering role, or be in a software engineering role and keen to move into data engineering.

Other Skills:

  • Strong ability and desire to learn. We work with a multitude of technologies and always strive to discover the next solution that can benefit our clients.
  • Fluency in any of the European languages is a plus.

Why Work With Us

  • Competitive salary.
  • A focused yet friendly and fun working environment.
  • Working with and having a direct impact on the growth of some of the most innovative startups of our day.
  • Constant learning opportunities on the job while being supported by top-tier talents in the industry.
  • Flexible work arrangements.

konux - Data engineering team lead - munich, germany

Show Interest Here

The Role

We have built the KONUX Predictive Maintenance System for Rail Switches, the first AI-based solution for the rail industry. As a Data Engineering Team Lead at KONUX you will be responsible for managing a large research data resource that is growing rapidly in both size and complexity. You will take pride in ensuring that the data science team has the best possible access to well-organized data sources. You will also act as an interface with our production team, ensuring alignment of model data requirements. You will, of course, be automating data management processes and be responsible for data cleansing. Ideal candidates learn and adapt quickly and will be able to use every tool at their disposal to understand and effectively tackle hard problems. You will be working in a Linux environment.

When pioneering a new technology, the unexpected is expected. Thus, the KONUX Culture includes a growth mindset, resilience and flexibility. This means: seeing setbacks as learning opportunities and encouraging you to drive harder. Improving continuously by failing fast and learning fast. Staying up when the going gets tough. Thinking and acting without barriers within a highly dynamic environment, based on data-driven decisions. If you share that mentality, apply and become a part of the KONUX vision: Transform railway operations for a sustainable future!

Your Responsibilities

  • Lead a team of Data Engineers working on complex research projects
  • Develop your team members to reach their full potential by mentoring and training them, creating a space with opportunities for personal and professional growth
  • Support and guide them towards success and towards being a high performing team
  • Actively participate in recruiting and growing the team
  • Ensure the consistency and quality of a wide range of raw data sets, including millions of time series recordings from the field
  • Assist with the design and development of the back-end (and/or front-end) for web-based data visualization applications
  • Provide database support and management of research model input and output data, including preparation of schema
  • Create functional specifications and schema for new and existing data management processes
  • Manage the data lake infrastructure, including services like Spark/EMR, Hive and Athena
  • Write interfaces to other systems for data gathering and data acquisition
  • Develop data acquisition monitoring and accounting processes that alert users to missing data
  • Automate the data cleansing and data pruning processes

Your Profile

  • A degree in Computer Science, Engineering, or a related field
  • 5-7 years in a Data Engineering or very similar role
  • Proven experience in leading technical teams towards success
  • Proven experience in coaching/mentoring team members
  • Proficiency in SQL and Python
  • Advanced proficiency with Spark/EMR, Airflow, Hive, Athena and Redis
  • Advanced knowledge of different database technologies like SQL, NoSQL, object-relational databases
  • Experience in database schema design and database management in a business environment
  • Ability to understand and deliver in a complex and rapidly evolving data and product environment
  • Demonstrated examples of working with business partners to deliver a solution that meets project objectives (requirements generation, functional specification generation to project execution)
  • Familiarity with the fundamentals of machine learning
  • Strong interpersonal communications skills (verbal, presentation and written) 
  • Autonomous and solution-oriented working methods
  • An understanding of traditional business intelligence and data warehousing concepts (ETL, data modeling, and reporting)
  • Experience in leading, coaching and mentoring teams
  • Sharing the KONUX Culture, incl. a Growth Mindset, Resilience and Flexibility

Nice to Have

  • Problem-solving and project management skills
  • Ability to work closely with others in a collaborative environment
  • Teamwork, creativity and communication skills

Don’t Apply If

  • You have never worked in data engineering
  • You have never touched one of these technologies: Spark/EMR, Airflow, Hive, Athena, Redis
  • You feel that the KONUX Culture is not for you

inawisdom - senior data scientist - Multiple European Locations

Show Interest Here

We are currently recruiting for a Senior Data Scientist to join our fast-growing and successful Machine Learning and Data Science consultancy. The successful candidate will lead on a variety of projects with customers across Financial Services, Energy and Utilities, Medical Research, Sports and Entertainment and Engineering, amongst others. You will lead in developing and implementing algorithmic tools to best gain insight from consumer data. You will research and apply data modelling, machine learning and data mining techniques to ensure products create an up-to-date and accurate platform providing client-friendly insight. You will self-direct your work to ensure best-in-class productionised analytics and insight generation. You will act as lead consultant or mentor to less experienced members of the team, so first-class communication skills are a must.

To successfully deliver in this position you will have:

  • Masters level or above in statistical discipline or relevant degree (or equivalent experience).
  • Commercial experience with big data sets.
  • Experience developing and implementing state of the art machine learning techniques (NLP, Deep-Learning, Neural Networks, for example).
  • Python, JavaScript, BigQuery, AWS (Amazon Web Services), pandas, NumPy, TensorFlow, scikit-learn.
  • Experience with behavioural, financial and social data.
  • Experience cleaning data.
  • Experience managing a project end-to-end.
  • Ability to discuss complex topics with both technical and business audiences.

crosslend - senior data engineer - remote, berlin, germany

Show Interest Here

Who we are looking for

We are looking for a solution-oriented Senior Data Engineer with strong communication skills to join our team. The right candidate will become the owner of multiple projects and processes on our Data Platform and will help us build a state-of-the-art data processing solution. You will work in cross-functional, feature-oriented squads (following the Spotify model) and will support your colleagues and team members by sharing knowledge in pair-programming and code review sessions.

What we offer

  • Be part of a growing team making fast decisions, where you can watch your ideas and actions come to fruition. This is what adds up to time well-spent – not mere billable hours
  • Diversity: intellectual as well as cultural – join a welcoming international team of smart and open-minded people, where it’s easy to make friends
  • A Personal Development Plan, along with access to dedicated resources to ensure that you can be the best in your role
  • Work-family-friends balance – step off the treadmill and feel like a human again: our office is all about maintaining a healthy balance between a results-driven work environment and your all-around wellbeing
  • An opportunity to be the change you want to see: at CrossLend you can use your skills to not only make a good living, but to enhance the transparency of the financial ecosystem
  • Creative ownership – drive the business forward with your ideas, launch projects from the ground up and see them through from inception to completion, giving your input and galvanising your colleagues while you bring out the best in each other

What you bring on board

  • 3-5 years of professional experience as a data engineer in a fast-paced environment
  • Master’s degree in Computer Science, Engineering or Physics, with 2+ years of engineering experience and 1+ years of management experience; or, regardless of university degree, 5+ years of engineering experience and 2+ years of management experience
  • Expert knowledge in Python and SQL
  • Ability to make decisions that ensure the software built is architecturally consistent and of high quality
  • In-depth understanding of data structures and data pipelines (incl. performance related dependencies)
  • Experience with data integration, data transformation, and database schema design and evolution
  • Experience in building RESTful APIs (e.g., Flask or FastAPI)
  • Knowledge in CI/CD and DevOps practices
  • Experience with Docker and Kubernetes is a plus
  • Background in software testing
  • Ability to communicate with our Data Scientists (being able to read R is a plus)
  • Familiarity with distributed processing (Apache Spark/Apache Flink)
  • Good knowledge of Linux and Shell
  • Ability to develop data pipelines/workflows (Apache Airflow)

Why do we need you

  • Model ETL processes with Apache Airflow: transform and sanitise data, schedule workflows, and move data between systems
  • Connect our internal data warehouse with external sources
  • Assure reliability, consistency and availability of data flows
  • Create a robust and scalable framework for connecting external interfaces
  • Ensure data science models and algorithms can be run on production level
  • Support the data science and backend teams in meeting their SLAs
  • Support your colleagues and team members, working collaboratively and sharing knowledge in pair-programming and code review sessions
  • Report directly to the Engineering Manager Data

What we are doing

CrossLend is developing a platform to connect institutional sellers (originators) and buyers (investors) of loans on a large scale. Originators provide historical data that is used to create prediction models. Based on the predictions, investors can run several types of analytics to define their investment criteria. Once an investment is made, the platform is fed with updates on loan performance to provide further portfolio analysis. This gives investors a high level of transparency, helping to prevent crashes like the one in 2008. As a further step, purchased loans can be resold on the platform, creating a liquid market for loans and enabling a better flow of capital between countries in the European Union.

*All qualified applicants to our hiring partners are considered for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran’s status or any other protected characteristics.

Post a Job for Free

Europe Hiring Partners

3 Easy Steps

  1. Upload resume and complete your profile
  2. Show interest in any open positions
  3. Complete free assessments to upskill while you wait

Employers will be notified when you show interest in open roles. They will reach out to you directly if you’re a good fit.

upload resume now

Career Lab Speakers

Build your Own Job

Jack Raifer | Head of Data Science | ADA Intelligence

Keep it simple: how to talk to executives in an effective way

Daniela Petruzalek | Lead Data Architect, Executive Director | J.P. Morgan

Academia, Startups, and Enterprise: A Cross-Analysis of Work and Goals

Dan Shiebler, PhD | Staff ML Engineer | Twitter

How I became a data science consultant and other stories

James Keirstead, PhD | Senior AI Consultant | DAIN Studios

Live Schedule

10:40am BST – 11:20am BST – Daniela Petruzalek – Join Here

11:25am BST – 12:05pm BST – James Keirstead, PhD – Join Here

2:00pm BST – 2:40pm BST – Jack Raifer – Join Here

2:45pm BST – 3:25pm BST – Dan Shiebler – Join Here

Join Here

Wednesday, June 9 from 12:40 PM – 1:25 PM BST (GMT+1)

Join Here

Tuesday, 8th of June and Wednesday, 9th of June from 10:00 AM to 5:00 PM BST (GMT+1)

Join Here

Open Data Science
One Broadway
Cambridge, MA 02142
