Interview Preparation (DeepMind, CoreAI, Efekta)
Q1 2026 - support Martin - lots of independence and autonomy.
world is big
Boyd is 7 weeks on the road
want to pass the ball - make decisions on the fly
lots of independent decision making
also Germany - straddle that
business can’t go without this role; understated, but effectively Jason’s substitute
- good feedback / consistent resilience / passion
- predict amazing C-level engagement
- help define issues and think broadly across the org
- passion for trying to test innovation
- question sits more with Ade: is this person a strategic operator / broader-scale person rather than a specific Inception fit?
- if it weren’t a project role
- is it the best fit.
- mull over - thinking mid-Q4
- encourage other introductions?
- is this the right thing for this person / or other options / strategic operator
- not a ‘no’ for Inception
What Is My Role?
- great interest in Ade’s team.
- willing to put behind the team. more incentives to do more, and more roles there within the Inception team.
- Ade’s in a position where everything that comes to him that’s non-product facing
- he has to make the call on what to highlight
- one space: he knows the business has a deep interest, and one area that’s important.
- there’s some other stuff that needs to be managed.
- gut call on risk.
- go in with a curious mindset. will be looking for fungibility w/ appetite to take risk.

- Praveen Srinivasan (Co-lead of Project Astra at Google DeepMind)
- Alexandre Moufarek (Product Lead, AI Research at Google DeepMind | Genie 3 | Project Astra | SIMA | Gemini)
- have completed all the internal conversations
- Prav / Alex:
- Prav: technical director / eng focused.
- Alex: product focused.
- Share the complexity of the projects. Can I make it to the end point? What should the focus of the endpoint be?
- High level strategy but also get into the details.
- Fit comes into it.
- Both in Ade’s direct reporting line.
- Alex: where is the hand off happening
- My examples: struggles / complexity of project.
- Key strategic output.
- Very intentional growth at GDM.
- Google is very deeply functional; GDM feels very startup-like. Need to be a bit more disciplined in terms of process, but come up against a lot less of that. The most novel ideas come through.
- front end of that startup race. hand off before even the first decisions need to be made on product.
- Zoubin / Raia leading research teams. Josh Woodward leads the product function.
- very independent. GDM.
- not utilizing me if we didn’t explore deeper lead connections.
- biz development for me / connections.
- product across google. test and learn in google products.
- because GDM is a startup and Inception is the startup within it.
- don’t get sucked into ops. go really deep really quickly then pull out.
- leaders struggle?
- sometimes see failure in research where people can’t pivot.
- when they rely on political profile but can’t stay autonomously accountable and move things. decision by committee doesn’t work. take accountability and move.
- knowing that I’d be in a pocket of high innovation - be honest with myself about thinking innovatively constantly for a long period of time.
- THEY WILL ALWAYS CARE ABOUT PRODUCT / output / tech. this is team members.
- need to be honest about whether this is something I want to do day in and day out.
Q&A Key Points
- sense check on a scenario.
- pragmatic optimism
- looking for someone who is about innovation and possibilities.
- gut feel and risk, but also a deep innovative mindset. People could package things - what’s the next thing?
- someone comes with triplets - where are the quads? Undefined opportunity to commercialize and productize.
- both internal and external
:::note
- if there are next steps: 4-5 conversations. The function is unique, so: a connection with the product lead or the gen AI lead, or someone in Demis’s team; then a people and culture conversation.
- culture within the org - similarities and differences
:::
Convergence of all the consumer properties.
How do we make it so there’s one Microsoft user experience.
Product side and AI research side.
World’s biggest startup → short cycles, but everything’s on fire.
Education and healthcare → make this more fair for everyone.
- Product: the voice and vision team sits in London / podcasts / daily briefings.
- AI side: evolving the model - tried to do it all at once, then separated; now there’s a disconnect.
- Need AI PMs as connective tissue, and the other way round: product people that fit into it. Nando, who leads multi-modal, needs guidance; need to draw the teams together.
- Also, speak to Mustafa. TAs: hired Chris Schneider as a TA on Security, but then help.
- The Copilot product is in a good place
- The cross-company branding is a mess
- Need to focus on:
- Distribution
- Memory
- Use emerging open protocols (MCP) to take advantage of others’ capabilities (M365 data, etc.) - see the sketch after this list
- Drive the cost of experimentation to zero
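As a concrete illustration of the MCP bullet above, here is a minimal sketch of an MCP tool server in Python, assuming the official `mcp` SDK (installed with `pip install "mcp[cli]"`). The server name, the `upcoming_meetings` tool, and its stubbed data are hypothetical placeholders, not an actual Copilot or M365 integration.

```python
# Minimal MCP tool-server sketch (assumes the official `mcp` Python SDK).
# The tool and its data are placeholders; a real server would wrap an
# existing capability (e.g. a calendar or document API) behind MCP.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-context")  # name shown to connecting MCP clients


@mcp.tool()
def upcoming_meetings(user: str, days: int = 7) -> list[str]:
    """Return meeting titles for `user` over the next `days` days (stubbed)."""
    # Placeholder: a real implementation would query the underlying data source.
    return [f"(stub) meetings for {user} over the next {days} days"]


if __name__ == "__main__":
    # Serves over stdio by default, so any MCP-aware client can attach to it.
    mcp.run()
```

Any MCP-aware client (an agent, an IDE, a chat surface) can then discover and call the tool without bespoke integration work, which is the “take advantage of others’ capabilities” idea in the bullet above.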
- Diversity but not big AI tech intensity
- Look at AI and build the network later
- 3-4 really good firms - he can help in due course
- 3 roles: big tech (GDM obvious fit - Kohli’s team) - will mention it to Demis in passing (keep an eye on Lila Ibrahim - COO and hiring); emerging (ElevenLabs (audio), Synthesia (video), …)
- GV: more financially oriented (Thinking Machines, SSI) - will invest $1.4bn. Excited to talk to portfolio companies.
- Higher clock speed
- Lots of concept cars. Don’t want to be in charge of concept cars for a big company. Do production cars.
Efekta Preparation
- Call with Lee - know what they’re trying to achieve - need to work out the role
- Go through what the C-level role becomes
- Tool for both the educator and the learner.
- Addi (assistant), Classroom AI, Hyperclass: start of something. Malleable information - personalization doesn’t just react with new content in a set medium, but creates new content that’s best suited for that particular learner.
- Opportunity to build a platform and add subjects (English-only)
- Interesting challenge with strong competition in the direct-to-consumer segment (Duolingo, etc.)
- Leadership team?
- Relationship with EF Group?
- What is the most important hire you’d like to make at Efekta?
- Priorities across B2B, B2C, B2Gov?
- What are you most excited about for Efekta over the next 12 months—and what’s keeping you up at night?
- If I joined, where could I have the biggest impact in the first 90 days?
- computer science + PhD at Cambridge. Research lab (Oracle Research Lab).
- invent technologies and spin them out via Hermann Hauser; there was little business guidance.
- joined McKinsey to learn about business
- built businesses for them - dotcoms: Rightmove, etc.
- then went into building credit card companies. Got recruited by Standard Chartered to run their credit card business in Asia - the biggest in Asia.
- learned about leadership and marketing but hated banking
- moved to Hong Kong
- met the owners of EF there.
- bought this awful private university in the US. maybe you could run a business school / university?
- had assumed he was going to build an online school. MBA on a video iPod, but not ready at that time. would spend time but it wasn’t going to be a successful business. built a more traditional one. 16 years → 1 mill to 300 mill in revenue.
- just before covid made a switch to back to tech.
- back in london for 10 years.
- ending the search for a replacement. EF was in online education for 30 years - way too early to be successful.
- Covid drove adoption and acceptance of online education.
- EF give you all the assets, go build something valuable.
- focus on languages. b2c can be big but not valuable.
- back end of covid: tech that Lee had bought to run our own systems - had sold it into high schools in Brazil. Don’t have to spend money to find teachers or students.
- that is small but highly profitable. if we can build that it will be profitable and valuable.
- sell the technology into public schools around the world.
- built the AI assistant - out there selling this AI instructor for English. In July, will turn it on for 4 million kids. Doubled every year. 30 mill in revenue next year w/ line of sight to double every year.
- last year, talked to 30 different countries with 200 mill kids. Can grow to 100 mill kids in 3-5 years at 5-20 dollars per student per annum @ 80% margin.
- São Paulo state w/ 3 million students. Just asked for physics.
- normally we build businesses focused on Europe, but this is the global south: large populations, no teachers, but hungry.
- lee’s tech has proven scale.
- before the ai instructor - proven success for millions of kids
- alternatives were poor. The current solution massively outperforms them.
- everyone’s in the market for better English language education.
- you can transfer knowledge but not develop a skill that requires practice and feedback.
- they get it w/ ai.
- will close Rwanda, Egypt, Bahrain, perhaps.
- education is currently $2.7 trillion worldwide. It will eat through textbooks and teaching assistants, and ultimately teachers.
- distribution will ultimately define the winners.
- now going to try to raise a couple of hundred million dollars to expand growth. sales teams, then buy them out of current text book deals.
- we’ll give you 2 years free even though you’ve got a 5-year contract with McGraw-Hill
- easy to move into other languages, hard to move into STEM.
- combined STEM is 2x english
- legally separated the digital assets from the rest of EF
- online schools will stay with EF.
- everything we’re raising money against is in its own company. Some relationships: rent offices in Chelsea, license tech and provide teachers each way. But self-contained.
- business is 100M rev: 30M in schools, 70M in language solutions to corporates. Don’t think the latter will grow massively; focus is growing the schools business.
- he is ceo.
- about to appoint the CFO from EF. capital markets of a public co.
- lee in charge of product.
- head of sales: experienced B2B sales from EF. No background in marketing / education, but good at sales.
- Lee’s tech team: reasonably weak. AI Jesus - young, mid 30s, but not clear about what’s coming. CTO will need to be upgraded significantly.
- Chief Academic Officer - world expert
- weaknesses: Lee and Stephen are recruiting an advisory board - a politician (José Manuel Barroso), an AI godfather who knows where AI is going (build on top of it rather than being ridden over), and a software entrepreneur (Daniel Ek).
- want to be #1 and not hit the valuation we want.
Who I Am
- Intersection of: new technology x business strategy
- Enjoy working on different ambiguous, interesting projects and bringing clarity
- Experience across software engineering, program management, strategy and partnerships.
- Worked as an engineer, in business development & strategy, and in program management
- Designing and nurturing partnerships, bringing technology to market.
- NOW LOOKING FOR AN EXCITING ROLE WITH A MISSION-DRIVEN ORGANIZATION WHERE I CAN WORK AT THE FOREFRONT OF TECHNOLOGY, BUT HELP BRING IT TO MARKET IN A REAL WAY: INVENTION → INNOVATION
What I Can Bring to a Team
- Experience working at the intersection of new technology and business strategy.
- Demonstrated success leading multi-disciplinary, cross-company teams and programs.
- Success in a startup environment AND an understanding of how large companies (Microsoft) work
What I want to Contribute to
- A mission-driven effort that helps improve the world in which we live.
What I Care about
- Honesty, teamwork, solving for the global maximum
What I Think is Interesting
- ‘Inhuman’ intelligence
- Focus on utility and identify human tasks that technology can help with.
- 5 second tasks → 5 minutes → 5 hours / days / months
- Memory is key → people will forgo 10 IQ points if only a system remembered them better.
What I Think GDM Can Do Better
- Concern about subscription business.
- Concern about impact on core business.
What Do I Think is Most Interesting about GDM’s Current Work?
- David Silver’s “Era of Experience” paper
- World Models
Why GDM? What’s the Unique Draw for PJW?
- End-to-end focus on solving intelligence, and delivering it safely and responsibly.
- Demis: understand the world around us and help advance human knowledge.
- Act in the world!
My Point of View
About Me
Why AI Product Management Is Different
Playbook Tip:
- Lead with output quality.
- In AI, the model is both the compiler and the product.
- Ensuring it meets user needs and quality benchmarks is the foundation upon which great UX can then be layered.
AI product management differs from traditional software in three critical ways:
- Probabilistic Outputs: AI generates variable results influenced by training data, prompts, and real-world usage.
- Quality First: If model outputs are poor, no UI/UX polish can compensate.
- Rapid Evolution: AI frameworks, best practices, and models change quickly, requiring constant adaptation.
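A minimal sketch of the “define what good looks like” point above, in Python. Everything here is illustrative: `generate` stands in for whatever model call is being evaluated, and the golden examples and keyword criteria are hypothetical.

```python
# Minimal golden-set evaluation sketch. `generate` is a stand-in for any
# model call; the golden set and pass criteria are illustrative only.
from dataclasses import dataclass
from typing import Callable


@dataclass
class GoldenExample:
    prompt: str
    must_contain: list[str]   # simple keyword criteria for "acceptable"


GOLDEN_SET = [
    GoldenExample("Summarise: the meeting moved to Tuesday.", ["Tuesday"]),
    GoldenExample("Translate 'good morning' to French.", ["Bonjour"]),
]


def evaluate(generate: Callable[[str], str]) -> float:
    """Return the fraction of golden examples whose output meets the criteria."""
    passed = 0
    for ex in GOLDEN_SET:
        output = generate(ex.prompt)
        if all(term.lower() in output.lower() for term in ex.must_contain):
            passed += 1
    return passed / len(GOLDEN_SET)


if __name__ == "__main__":
    # Trivial fake model so the sketch runs end to end.
    score = evaluate(lambda p: "Bonjour! The meeting is on Tuesday.")
    print(f"golden-set pass rate: {score:.0%}")
```

Real golden sets are larger and use richer scoring (semantic similarity, rubric grading), but the shape is the same: fixed inputs, explicit acceptance criteria, and a pass rate that can be tracked across prompt and model changes.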
Traditional PM vs. AI PM
| Aspect | Traditional Product Management | AI Product Management |
| --- | --- | --- |
| Definition of “Good” | Features are defined by a set of functional requirements and deterministic logic. If the feature meets specs, it’s “good.” | Quality is probabilistic; “good” is defined by metrics like accuracy, relevance, clarity, or user satisfaction. Continuous measurement and clear criteria (golden sets, test sets) are essential. |
| Spec & Requirements | Specifications center on predefined features, acceptance criteria, and deterministic logic. Requirements are mostly about how the system should behave under various conditions. | Specs must explicitly define what good looks like through sample prompts, golden sets, and evaluation metrics. AI PMs must provide annotated examples, success benchmarks, and clear criteria for acceptable vs. unacceptable outputs. |
| Empirical Mindset | Validation relies on predefined use cases, acceptance criteria, and manual QA. | Demands a data-driven, experimental approach. Product teams must continuously test, measure output quality, and refine based on real-world feedback and metrics. |
| Core Focus | The UI/UX and workflow design often take precedence. If the feature’s logic is correct, a polished experience is enough. | AI output quality is paramount, overshadowing UI design. A subpar model output can negate even the best-designed interface. |
| Feature Crew Disciplines | Primary collaboration: Product Managers, Engineers, UX Designers, and Copywriters. | Deep collaboration is needed with applied research (for AI model development, prompt engineering, data pipelines) and technical writers (to craft prompts, refine model responses), in addition to classic disciplines (UX, copy, eng). |
| Data Requirements | Mostly static requirements and configurations; data typically is for analytics or minimal business logic. | Robust, high-quality datasets drive output evaluation and improvement. |
| Iteration | Iteration is usually tied to feature roadmaps and version releases; updates are less frequent once feature logic stabilizes. | An ongoing cycle of prompt tuning, model retraining, and evaluation. AI features often see continuous updates as the model and data evolve. |
| Evaluation & Testing | Test cases and QA checklists ensure deterministic outcomes match the specification. | Golden sets, automated metrics, LLM-as-judge pipelines, and human reviews. Success is assessed against empirical benchmarks and user feedback loops. |
| Stakeholder Collaboration | Product, marketing, and user research typically align on messaging once core feature functionality is locked. | Tight cross-functional alignment is critical. Marketing must understand AI’s capabilities and limits; user research must inform ongoing prompt/model refinements. |
| Risk of Failure | Bugs or mismatched features can lead to user frustration, but issues are often binary and more predictable. | AI outputs can fail in subtle ways—incorrect facts, biased or confusing responses. Failures may be less predictable and require robust risk mitigation (e.g., human-in-the-loop evaluations). |
| User Expectations | Consistent functionality once a feature “works.” | Variable output quality; must manage expectations and clarify limitations. |
| Safety & RAI | Privacy & Security requirements focus on data protection, regulatory compliance, and standard code-of-conduct. | Goes beyond privacy/security to include algorithmic bias detection, content moderation, ethical usage guidelines, and frameworks for responsible AI (e.g., fairness, transparency, governance). |
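The “LLM-as-judge pipelines” mentioned in the Evaluation & Testing row can be sketched as a short rubric-scoring loop. This is illustrative only: `call_model` is a hypothetical stand-in for whichever judge-model client is actually used, and the rubric and JSON response format are assumptions.

```python
# Sketch of an LLM-as-judge pass over candidate outputs.
# `call_model` is a hypothetical callable (prompt -> text); swap in the real client.
import json
from typing import Callable

JUDGE_PROMPT = """You are grading an assistant's answer.
Question: {question}
Answer: {answer}
Score the answer from 1 (unusable) to 5 (excellent) for factual accuracy and clarity.
Respond with JSON only: {{"score": <int>, "reason": "<one sentence>"}}"""


def judge(call_model: Callable[[str], str], question: str, answer: str) -> dict:
    """Grade one (question, answer) pair against the rubric."""
    raw = call_model(JUDGE_PROMPT.format(question=question, answer=answer))
    return json.loads(raw)  # production pipelines add validation/retries here


def run_eval(call_model: Callable[[str], str], pairs: list[tuple[str, str]]) -> float:
    """Return the mean judge score over a batch of question/answer pairs."""
    scores = [judge(call_model, q, a)["score"] for q, a in pairs]
    return sum(scores) / len(scores)
```

Judge scores are typically spot-checked against human review on a sample, since the judge model has its own failure modes.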