Wise Step Recruiting · Remote (Romania) · Contract · Senior · B2B · Long-term

Performance Engineer

Python · Databricks · Spark · AWS S3 · Delta Lake · ETL · Performance Testing · JMeter · k6

About the Role

You will be embedded directly into a FinTech client’s Data Pipeline team (Staff Augmentation model), working toward a critical July 2026 DaaS product launch. The client is actively migrating their data platform from PostgreSQL to Databricks. Your job is to prove the new architecture performs better, under real load, before launch.

There is effectively no existing performance testing framework. Assume you are building it from scratch, establishing the baselines, and delivering the quantified evidence that defines launch readiness.

What You’ll Be Testing

Pipeline Performance

Validating the speed and stability of Spring Java ETL jobs and Spark workloads running on Databricks under realistic production load. Identifying bottlenecks, profiling job execution, and defining acceptable throughput thresholds.
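To make the scope concrete: much of the harness work here is plain Python. Below is a minimal sketch of the kind of timing wrapper involved, where `run_job` is a hypothetical placeholder for however a given ETL or Spark job is actually triggered (Jobs API call, spark-submit wrapper, etc.); nothing in it reflects client specifics.

```python
import statistics
import time

def benchmark(run_job, runs: int = 5, warmup: int = 1) -> dict:
    """Time repeated executions of one pipeline stage.

    run_job: zero-argument callable that triggers a single job run
    (hypothetical placeholder, not from the posting).
    """
    for _ in range(warmup):
        run_job()  # discard cold-start runs (cluster spin-up, cache warming)

    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        run_job()
        durations.append(time.perf_counter() - start)

    return {
        "min_s": min(durations),
        "median_s": statistics.median(durations),
        "max_s": max(durations),
    }
```

Repeated runs with discarded warmups matter on Databricks in particular, where cluster spin-up and caching can dominate any single measurement.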

Data Loads & Transformations

Benchmarking both incremental and full data reload cycles. Stress-testing specific transformation logic, including business-key fetch operations, to establish SLA baselines where none currently exist.
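Since no baselines exist yet, part of the job is deciding what an SLA baseline even looks like. One plausible shape, sketched with purely illustrative names and numbers (nothing here comes from the client):

```python
def throughput_rows_per_s(rows_loaded: int, duration_s: float) -> float:
    """Normalize a load cycle to rows/second so incremental and full
    reload runs can be compared on one axis."""
    return rows_loaded / duration_s

# Hypothetical baselines captured from first measured runs (illustrative values).
BASELINE_ROWS_PER_S = {"incremental": 120_000.0, "full_reload": 45_000.0}
TOLERANCE = 0.90  # flag anything more than 10% below baseline

def check_sla(mode: str, measured: float) -> bool:
    """Return False (and report) when a run regresses past the SLA floor."""
    floor = BASELINE_ROWS_PER_S[mode] * TOLERANCE
    if measured < floor:
        print(f"{mode}: {measured:,.0f} rows/s below SLA floor of {floor:,.0f}")
        return False
    return True
```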

Databricks Write Speed & Migration Validation

Measuring write throughput to S3 Delta tables and benchmarking it directly against the existing PostgreSQL setup. This is not abstract performance testing: the client needs quantified proof that the new architecture delivers the expected performance lift before the July 2026 launch.
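A rough sketch of what that head-to-head comparison could look like, assuming a PySpark session with Delta Lake support and JDBC access to the incumbent database; the S3 path, JDBC URL, credentials, and row count are all illustrative assumptions:

```python
import time

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

def timed_write(df, write_fn) -> float:
    """Wall-clock seconds for one write; .save() forces execution."""
    start = time.perf_counter()
    write_fn(df)
    return time.perf_counter() - start

df = spark.range(10_000_000).toDF("business_key")  # synthetic rows, illustrative size

# Candidate architecture: append to a Delta table on S3 (path is hypothetical).
delta_s = timed_write(
    df,
    lambda d: d.write.format("delta").mode("append").save("s3://bucket/bench/delta_tbl"),
)

# Incumbent architecture: same rows over JDBC to PostgreSQL (URL is hypothetical).
pg_s = timed_write(
    df,
    lambda d: d.write.format("jdbc")
    .option("url", "jdbc:postgresql://host:5432/bench")
    .option("dbtable", "bench_tbl")
    .option("user", "bench")
    .option("password", "...")
    .mode("append")
    .save(),
)

print(f"Delta: {delta_s:.1f}s  PostgreSQL: {pg_s:.1f}s  ratio: {pg_s / delta_s:.2f}x")
```

In a real comparison the data volume, schema, partitioning, and cluster/instance sizing would all need to mirror production before the numbers mean anything.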

Requirements

  • 5+ years in performance engineering or load testing, with a focus on backend systems and data pipelines.
  • Hands-on experience performance-testing ETL pipelines or batch processing systems — this is the core of the role.
  • Databricks or Spark pipeline experience — you need to read the Spark UI, interpret cluster metrics, and understand what you’re benchmarking.
  • Experience building performance testing frameworks from scratch — no existing harness, no pre-defined baselines.
  • Strong Python scripting for custom benchmark automation and test harnesses.
  • Familiarity with AWS and cloud-native data architectures (S3, Delta Lake).
  • Fluent English — direct collaboration with an international engineering team.

Nice to Have

  • Experience with JMeter, Gatling, k6, or equivalent tools (applicable to API layer validation).
  • Background in FinTech or financial data platforms.
  • Familiarity with Spring Java applications and their performance profiling.
  • Experience establishing performance SLAs and communicating benchmark results to engineering leadership.