Role Description
This role may also be located in our Playa Vista, CA campus.
Applicants in the County of Los Angeles: Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.
Applicants in San Francisco: Qualified applicants with arrest or conviction records will be considered for employment in accordance with the San Francisco Fair Chance Ordinance for Employers and the California Fair Chance Act.
In accordance with Washington state law, we are highlighting our comprehensive benefits package, which is available to all eligible US-based employees. Benefits for this role include:
- Health, dental, vision, life, disability insurance
- Retirement Benefits: 401(k) with company match
- Paid Time Off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
- Sick Time: 40 hours/year (increased to 69 hours/year for Seattle) including 5 discretionary sick days per instance
- Maternity Leave (Short-Term Disability + Baby Bonding): 28-30 weeks
- Baby Bonding Leave: 18 weeks
- Holidays: 13 paid days per year
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Sunnyvale, CA, USA; Los Angeles, CA, USA; Seattle, WA, USA; San Francisco, CA, USA.

### Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience.
- 10 years of experience with cloud infrastructure.
- Experience building and operationalizing machine learning models.
- Experience delivering technical presentations and leading discovery and planning sessions.
### Preferred qualifications:
- Experience training and fine-tuning large models (e.g., image, language, segmentation, recommendation, genomics) with accelerators.
- Experience with performance profiling tools (e.g., TensorFlow Profiler, PyTorch Profiler, TensorBoard).
- Experience designing/architecting large-scale infrastructure farms for specialist AI use cases.
- Experience with containerization and Kubernetes, including Kubernetes on Google Cloud.
- Experience with machine learning benchmarks.
- Ability to engage with C-level or executive business leaders and influence decisions.
About the job
-----------------
When leading companies choose Google Cloud, it's a huge win for spreading the power of cloud computing globally. Once educational institutions, government agencies, and other businesses sign on to use Google Cloud products, you come in to help make their work more productive, mobile, and collaborative. You listen and deliver what is most helpful for the customer. You assist fellow sales Googlers by problem-solving key technical issues for our customers. You liaise with the product marketing management and engineering teams to stay on top of industry trends and devise enhancements to Google Cloud products.
Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
The US base salary range for this full-time position is $153,000-$222,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
--------------------
- Be a trusted advisor to our customers, helping them understand and incorporate AI accelerators into their overall cloud strategy by recommending migration paths, integration strategies, and application architecture that incorporate Google Cloud AI optimized infrastructure.
- Demonstrate how Google Cloud is differentiated, highlighting the power of accelerators by working with customers on proof of concepts, demonstrating features, optimizing model performance, profiling, and benchmarking.
- Build repeatable assets to enable other customers and internal teams.
- Influence Google Cloud strategy at the intersection of infrastructure and AI/ML by advocating for enterprise customer requirements.
- Travel to customer sites and events as needed.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
About Forward Deployed Engineering
Forward Deployed Engineers are embedded directly with customers to build custom solutions, integrate products into existing infrastructure, and bridge the gap between product engineering and customer success. The role combines deep technical skills with the ability to operate in client environments and translate business requirements into working software.
Pioneered by Palantir, the FDE model has spread across AI, enterprise SaaS, and cloud infrastructure companies. FDEs write production code, architect integrations, train customer teams, and feed product insights back to the core engineering organization. At companies like OpenAI, Salesforce, and Databricks, FDE teams are treated as elite engineering units that can ship custom solutions in days rather than quarters.
Typical FDE stack: Python, TypeScript, SQL, REST/GraphQL APIs, cloud platforms (AWS/GCP/Azure), and increasingly LLM APIs and AI orchestration frameworks. Strong communication and the ability to context-switch between technical and business conversations are as important as coding ability.
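As a rough illustration of the kind of glue work the stack above implies, here is a minimal, hypothetical Python sketch of mapping a customer's internal record format onto the JSON payload a downstream REST API might expect. All field and function names here are invented for the example; a real engagement would be shaped by the customer's actual schema and API contract.

```python
import json

def to_api_payload(customer_record: dict) -> str:
    """Translate an internal customer record into a JSON payload for a
    (hypothetical) downstream API. Field names are illustrative only."""
    payload = {
        "id": customer_record["account_id"],
        "name": customer_record["display_name"],
        # Assume the downstream API expects upper-case ISO country codes.
        "country": customer_record.get("country", "us").upper(),
    }
    # sort_keys gives a stable field order, which simplifies diffing and tests.
    return json.dumps(payload, sort_keys=True)

record = {"account_id": 42, "display_name": "Acme Corp", "country": "de"}
print(to_api_payload(record))
```

Small, well-tested adapters like this are typical of integration work: the value is less in the code itself than in correctly capturing the customer's data model and the target API's contract.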