
As a Big Data Developer, you will help extend a Big Data platform to meet the needs and requirements of our Client's business stakeholders. You will work under the guidance of team leads and data solution architects, supporting and enhancing the current Big Data and data warehouse environment. You will have the opportunity to collaborate with other data teams as well as business and IT teams.

Your key responsibilities: Big Data development

  • Collect business requirements: identify sources of relevant information and estimate the development effort
  • Choose the most suitable tools, taking into account their trade-offs and specifications
  • Estimate the throughput of every stage of the ETL process, identify possible bottlenecks, and scale the infrastructure according to those estimates
  • Design the ingestion, transformation, and classification ecosystem to extract and refine relevant features, and train classification, regression, and clustering models using real-time machine learning with Apache Spark Streaming and the Spark MLlib library (see the second sketch after this list)
  • Develop the ingestion code: Spark Streaming processes for ingestion and transformation (see the first sketch after this list), Akka-based streaming ETL, Kafka Streams and Kafka Connect (planned for inclusion once stable), and REST client web services to monitor the state of the frameworks
  • Develop business microservices: using the Fabric8 framework with Spring Boot and Spring Cloud, AngularJS to build web interfaces, and Activiti as a workflow engine
  • Put the nuts and bolts in place to deliver a DevOps environment for Continuous Delivery: Git to maintain the code repository, Jenkins to perform automated tests and deployments, and Docker with the Mesosphere PaaS to build a scalable environment for deployments
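
For illustration, here is a minimal PySpark sketch of the kind of Spark Streaming ingestion-and-transformation job described above (the first sketch referenced in the list). The broker address, topic, field names, and output path are hypothetical, and it assumes the era-appropriate Spark 1.x/2.x KafkaUtils direct-stream API with the spark-streaming-kafka package on the classpath:

    import json

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils

    sc = SparkContext(appName="IngestionETL")
    ssc = StreamingContext(sc, 10)  # 10-second micro-batches

    # Direct stream from a Kafka topic; records arrive as (key, value) pairs.
    # Topic and broker names below are examples, not the Client's setup.
    stream = KafkaUtils.createDirectStream(
        ssc, ["events"], {"metadata.broker.list": "broker1:9092"})

    # Transformation step: parse each JSON payload and drop malformed records.
    parsed = stream.map(lambda kv: json.loads(kv[1]))
    cleaned = parsed.filter(lambda event: event.get("customer_id") is not None)

    # Persist each refined micro-batch (e.g. to HDFS) for downstream consumers.
    cleaned.map(json.dumps).saveAsTextFiles("hdfs:///data/refined/events")

    ssc.start()
    ssc.awaitTermination()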
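
A second sketch shows real-time machine learning with Spark Streaming and the MLlib library, using StreamingKMeans as a stand-in for the clustering models mentioned above. The socket source, vector dimension, and model parameters are illustrative assumptions rather than the Client's actual pipeline:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.mllib.clustering import StreamingKMeans
    from pyspark.mllib.linalg import Vectors

    sc = SparkContext(appName="RealtimeClustering")
    ssc = StreamingContext(sc, 10)

    # Feature vectors arrive as comma-separated numbers; a socket source keeps
    # the sketch self-contained (in practice this would be the refined stream).
    lines = ssc.socketTextStream("localhost", 9999)
    features = lines.map(
        lambda line: Vectors.dense([float(x) for x in line.split(",")]))

    # Update k cluster centers incrementally as each micro-batch arrives;
    # a decayFactor below 1.0 weights recent batches more heavily.
    model = StreamingKMeans(k=3, decayFactor=0.7).setRandomCenters(4, 1.0, 42)
    model.trainOn(features)

    # Assign every incoming vector to its nearest current center and print it.
    model.predictOn(features).pprint()

    ssc.start()
    ssc.awaitTermination()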

You are ideal for this opportunity if you have:

  • Proficiency in Python and Spark development
  • Experience working in Hadoop environments with their suite of services (Oozie, Hive, Impala, ELK, Kafka, Storm, Sqoop, etc.)
  • Experience working with various data formats (XML, JSON, Avro, Parquet, etc.)
  • Strong experience in data ingestion and processing on Big Data platforms
  • Experience in ETL and data warehousing using the Microsoft BI stack
  • Strong self-sufficiency and initiative on Business Intelligence projects
  • Performance tuning and best practices on Big Data projects

It is a strong plus if you have:

  • Knowledge of insurance practices, processes, and issues
  • Knowledge of cloud solutions (Azure or AWS)
  • Knowledge of CI/CD
  • Polyglot programming skills (Java, Scala, etc.)

We offer:

ITDS Business Consultants is involved in interesting and innovative projects at large and mid-sized international companies in the financial services industry. The workplace is professional, specialized, driven, and inspiring.

This includes:

  • Challenging big data tasks
  • An opportunity to develop expertise in the financial industry
  • Stable, long-term cooperation

We organize social events and you will work in an international environment.

Let’s meet:

We would like to meet you. If you are interested, please apply and attach your CV in English or Polish, including a statement that you agree to our processing and storing of your personal data. You can also apply by sending an email to HR@itds.pl.

Our recruiters:

Daphne Admiraal, Corporate Recruiter