Infrastructure / Platform Engineer
Known
Location
San Francisco
Employment Type
Full time
Location Type
On-site
Department
Engineering
About the Role
You’ll be the foundational engineer owning Known’s core infrastructure and platform systems — the backbone that powers our AI-driven matching, voice, and scheduling experiences. From cloud infrastructure and data orchestration to performance monitoring and model deployment assistance, you’ll design and scale the systems that make Known fast, reliable, and secure.
You’ll work directly with the founding team (AI/ML, product, and design) to establish Known’s technical foundation — shaping not just our architecture, but our engineering culture and best practices from day one. This role is ideal for a pragmatic builder who enjoys going from “blank slate” to production and thrives in early-stage environments where reliability, velocity, and simplicity matter most.
Responsibilities
Design and manage cloud infrastructure (AWS-first, with IaC via Terraform).
Establish CI/CD pipelines and best practices for rapid, safe iteration (GitHub Actions, Docker, Kubernetes, etc.).
Build and maintain scalable data ingestion and orchestration pipelines to support ML and product analytics.
Administer and optimize our databases: PostgreSQL (with pgvector for embeddings) and an analytical warehouse (see the pgvector sketch after this list).
Collaborate with AI/ML engineers to deploy and monitor LLM and matching models for inference, evaluation, and retraining.
Implement observability (logging, metrics, traces, alerts) across backend services, data jobs, and model endpoints.
Drive reliability and scalability across our web, mobile, and agentic systems — from real-time voice matching to background batch workflows.
Collaborate cross-functionally with product, design, and ML teams to ensure infrastructure aligns with user and business needs.
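To make the pgvector work concrete, here is a rough Python sketch of what storing and querying embeddings might look like. The table name, embedding dimension, and connection string are illustrative placeholders, not our actual schema.

# Illustrative sketch: storing and querying match embeddings with PostgreSQL + pgvector.
# Table name, dimension (1536), and DSN are assumptions, not Known's real schema.
import psycopg

with psycopg.connect("postgresql://localhost/known_dev") as conn:
    # Enable pgvector (requires sufficient privileges) and create a table with a vector column.
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS profile_embeddings (
            profile_id bigint PRIMARY KEY,
            embedding  vector(1536)
        )
    """)

    # Upsert an embedding, passed as a pgvector text literal like '[0.1,0.2,...]'.
    embedding = "[" + ",".join("0.0" for _ in range(1536)) + "]"
    conn.execute(
        "INSERT INTO profile_embeddings (profile_id, embedding) VALUES (%s, %s::vector) "
        "ON CONFLICT (profile_id) DO UPDATE SET embedding = EXCLUDED.embedding",
        (42, embedding),
    )

    # Nearest-neighbor lookup by cosine distance (the <=> operator).
    rows = conn.execute(
        "SELECT profile_id FROM profile_embeddings ORDER BY embedding <=> %s::vector LIMIT 10",
        (embedding,),
    ).fetchall()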
Requirements
4+ years of experience in infrastructure, platform, or data engineering (startup or high-growth environments preferred).
Strong proficiency in Python and TypeScript, plus comfort with shell scripting (Bash) and configuration formats (YAML).
Deep understanding of cloud architecture (AWS, GCP, or similar) and Infrastructure-as-Code (Terraform, Pulumi, or CloudFormation).
Solid experience with containerization and orchestration (Docker, Kubernetes, ECS).
Proven ability to design and operate data pipelines and distributed systems.
Experience with PostgreSQL (ideally with pgvector or embeddings), data modeling, and schema design for real-time and analytical workloads.
Familiarity with ML/AI workflows (model training, inference, monitoring) and feature stores is a plus.
DevOps fundamentals: observability, cost optimization, and security.
Collaborative mindset, strong ownership, and bias toward shipping working systems fast.
Example Projects
Stand up a data lake + warehouse for storing and analyzing user signals, transcripts, and model outputs.
Build real-time ingestion from app, agent, and third-party APIs (OpenAI, Twilio, Stripe).
Deploy and scale voice agent infrastructure with low-latency streaming, recording, and monitoring.
Design the CI/CD and observability stack for Known’s core services.
Assist ML engineers in implementing model deployment pipelines for embedding generation and re-ranking inference (a rough sketch follows below).
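As an illustration of that last project, here is a minimal Python sketch of embedding generation plus cosine-similarity re-ranking. The model name, candidate texts, and function names are placeholders; the real pipeline would serve this behind a monitored inference endpoint.

# Illustrative sketch: generate embeddings and re-rank candidates by cosine similarity.
# Model name and candidate data are placeholders, not Known's production pipeline.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    """Return an (n, d) array of embeddings for the given texts."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

def rerank(query: str, candidates: list[str], top_k: int = 5) -> list[tuple[str, float]]:
    """Score candidates by cosine similarity to the query and return the top_k."""
    vectors = embed([query] + candidates)
    q, cands = vectors[0], vectors[1:]
    scores = cands @ q / (np.linalg.norm(cands, axis=1) * np.linalg.norm(q))
    order = np.argsort(scores)[::-1][:top_k]
    return [(candidates[i], float(scores[i])) for i in order]

if __name__ == "__main__":
    print(rerank("enjoys hiking and live music", ["loves concerts", "prefers quiet nights in"]))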
Why This Role
You’ll help define the technical foundation of a product that blends human connection and advanced AI. As one of Known’s first engineers, you’ll make decisions that influence how the product scales, performs, and evolves — from the data stack to the deployment layer.
If you’re excited by the idea of shaping the platform behind a category-defining AI product, this is the place to build it.