Intel - Graph Compiler Deep Learning Engineer (Petah Tikva)
Intel is seeking a Graph Compiler Deep Learning Engineer to join its AI engineering team in Petah Tikva. The role sits at the intersection of compiler technology and deep learning systems engineering, focusing on optimizing computation graphs and compiler flows for high-performance AI workloads on Intel platforms.
Role overview:
You will work with compiler and hardware teams to design and implement optimizations that improve throughput, reduce latency, and enable efficient deployment of neural networks across Intel accelerators and CPUs. Your work will directly impact model performance in production systems and influence the end-to-end AI software stack.
Key responsibilities:
– Design and implement graph-level compiler optimizations for deep learning workloads.
– Collaborate with hardware architects and runtime teams to expose and leverage platform capabilities.
– Profile and analyze model execution to identify performance bottlenecks and propose fixes.
– Build and maintain tools for automated benchmarking, validation, and regression testing.
– Assist in integrating compiler output into ML frameworks and serving infrastructure.
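To make "graph-level compiler optimization" concrete, here is a minimal, purely illustrative sketch of one classic rewrite, operator fusion, over a toy linear sequence of operator names. The function name and graph representation are hypothetical and are not part of any Intel compiler; a production graph compiler would operate on a real IR (e.g. MLIR) with dataflow edges, shapes, and cost models.

```python
def fuse_conv_relu(ops):
    """Fuse adjacent Conv -> ReLU pairs into a single ConvReLU op.

    ops: a linear sequence of operator names, e.g. ["Conv", "ReLU", "Pool"].
    Fusing the two ops avoids materializing the intermediate tensor,
    which is a typical graph-level latency/bandwidth optimization.
    """
    fused = []
    i = 0
    while i < len(ops):
        if ops[i] == "Conv" and i + 1 < len(ops) and ops[i + 1] == "ReLU":
            fused.append("ConvReLU")  # replace the pair with one fused kernel
            i += 2
        else:
            fused.append(ops[i])
            i += 1
    return fused

# Example: two Conv+ReLU pairs collapse, the Pool is untouched.
print(fuse_conv_relu(["Conv", "ReLU", "Conv", "ReLU", "Pool"]))
# → ['ConvReLU', 'ConvReLU', 'Pool']
```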
Qualifications:
– B.Sc. or M.Sc. in Computer Science, Electrical Engineering, or a related technical field.
– Strong software engineering skills in C++ and Python.
– Practical experience with deep learning frameworks (TensorFlow, PyTorch) and model graph representations.
– Familiarity with compiler infrastructures (LLVM, MLIR, XLA) and performance optimization techniques.
– Knowledge of GPU/accelerator programming (CUDA) or optimization libraries (oneDNN, MKL) is advantageous.
Why this role matters:
This position offers the opportunity to work on the hardware-software boundary, accelerating the next generation of Intel’s AI products. If you enjoy low-level systems work, performance tuning, and collaborating across teams to ship impactful AI features, this role is an excellent fit.