Want Higher Platform Adoption? Write Better Docs (for Robots)
Or: A Human's Guide to Teaching Robots How to Google

We're moving into a world where agentic code tools are driving prototypes and assisting human developers. As models rapidly advance (just ask any Claude Code user), the next generation of software will largely be built by agents guided by human architects, developers, and analysts.
We're not quite there yet. Reaching this future means re-thinking how we deliver platforms and frameworks. Similar to e-commerce, where detailed and accurate product data is crucial for discovery, agentic platforms need improved documentation. They'll also need to offer tools like MCP servers to boost adoption.
How I Worked Around Google Vertex's Poor Documentation
Here’s a personal experience from this week: I tackled a hobby project, an application that generates Bible study guides, and took the opportunity to explore Google’s Vertex AI for hosting, monitoring, and managing agents.
As this was a learning project, I opted for Claude Code over my usual Cursor environment. I quickly prototyped the application, which successfully generated output from input. Success!
Deployment, however, was a nightmare. Claude, ChatGPT, and Gemini all struggled for hours, making repeated errors due to outdated documentation, misunderstood API interfaces, and persistent troubleshooting loops caused by unhelpful logging and feedback from Google Vertex.
I’m Lazy, so I Doubled Down
A more disciplined developer would have realized this required human intervention. But my lazier self wondered, "How can I empower the robots to complete this task for me?"
The solution: better-researched and validated information. I tasked Claude and Gemini with "Deep Research" projects to document current best practices and the most up-to-date APIs for building, deploying, and testing my agent.
Next, I had Claude validate and compile a single document from their findings, which Claude Code could then use. The result was incredibly thorough and impressive.
Then, I assigned Claude Code the deployment task, starting it in Planning Mode. This generated a detailed execution plan from the documentation and the prototype. Claude Code's Planning Mode is a game-changer, breaking down complex implementations into reviewable steps. Recently, Anthropic also introduced the option for Max plan users to leverage Opus for complex planning and the more cost-effective Sonnet for execution.
The results were mixed.
The Good:
- Claude Code effectively followed its implementation plan, even scripting clunky GCP configuration changes (like IAM, a definite pain point; see the sketch after this list).
- It handled most of the heavy lifting for deployment scripts and test harnesses, validating each step.
- This iteration had significantly fewer mistakes, dead ends, and workarounds. The combination of Deep Research (more compute spent on correct APIs/docs) and Planning Mode (sequential plan) made a huge difference.
- I could also extract Claude's adjustments in a format usable by my documentation-writing agent, ensuring a better starting point for future similar tasks.
- Ultimately, it worked!
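As a flavor of that first point, here is a minimal sketch of the kind of IAM scripting involved. It assumes the gcloud CLI is installed and authenticated; the project ID, service account, and roles are hypothetical placeholders, not what my project actually used.

```python
# Minimal sketch: granting a service account the roles an agent deployment
# needs. Project, member, and roles are hypothetical placeholders; assumes
# the gcloud CLI is installed and authenticated.
import subprocess

PROJECT = "my-gcp-project"  # hypothetical project ID
MEMBER = "serviceAccount:my-agent@my-gcp-project.iam.gserviceaccount.com"  # hypothetical

for role in ("roles/aiplatform.user", "roles/storage.objectViewer"):
    subprocess.run(
        ["gcloud", "projects", "add-iam-policy-binding", PROJECT,
         "--member", MEMBER, "--role", role],
        check=True,  # fail loudly, so the agent sees the error immediately
    )
```

The `check=True` matters more than it looks: an agent troubleshooting a deployment needs loud, early failures, not silently swallowed ones.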
The Bad:
- Google's documentation is subpar. While many guides exist, they omit crucial details and recommend incorrect API usage. Claude eventually figured out the correct approach through error messages and re-examining documentation (a sketch of what finally worked follows this list), but the Vertex AI docs were clearly deficient.
- Vertex AI's deployment error messages were extremely difficult to interpret. They were excessive, pointed to internal exceptions, and lacked descriptive clarity. Even the in-GCP agent couldn't effectively diagnose the errors.
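For reference, here is a hedged sketch of roughly what the working deployment looked like in the end. Vertex AI's Python surface has been renamed more than once (reasoning_engines became agent_engines), so treat every name below as a snapshot of google-cloud-aiplatform at the time of writing, not gospel; the agent class is a toy stand-in for my real one.

```python
# Hedged sketch of deploying a simple agent to Vertex AI's Agent Engine.
# API names may have shifted since this was written (which is rather the
# point of this post). Requires: pip install "google-cloud-aiplatform[agent_engines]"
import vertexai
from vertexai import agent_engines

class StudyGuideAgent:
    """Toy stand-in for the real agent. Agent Engine accepts a serializable
    object exposing query() (and optionally set_up())."""
    def set_up(self) -> None:
        pass  # load models, clients, etc.

    def query(self, passage: str) -> str:
        return f"Study guide for {passage} goes here."

vertexai.init(
    project="my-gcp-project",                 # hypothetical project ID
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",  # hypothetical bucket
)

remote_agent = agent_engines.create(
    agent_engine=StudyGuideAgent(),
    requirements=["google-cloud-aiplatform[agent_engines]"],
)
print(remote_agent.resource_name)
```

The irony, of course, is that a sketch like this may already be stale by the time you read it, which is exactly why agents need live, queryable documentation rather than frozen guides.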
I Was Promised Flying Cars and Jet Packs
It's not unreasonable to expect that platform providers, aiming for easier adoption, faster development, and greater value for their users, should enable straightforward use cases to be built, deployed, and tested via tools like Claude Code, almost from a single prompt. At minimum, these tools should be able to navigate error messages and feedback to reach an MVP with minimal time and tokens.
E-Commerce is Leading the Way
Fortunately, some e-commerce platforms are setting the standard. commercetools was among the first to offer both an AI agent for its documentation and an MCP server. Shopify provides an in-application agent for task-related questions and an MCP server for its documentation.
These MCP servers are crucial. They allow LLMs like Claude Code or Cursor, or even basic LLM chat interfaces, to directly query vendors on how to accomplish tasks. Assuming correct implementation, this should drastically reduce mistakes, hallucinations, and troubleshooting time.
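To make that concrete, here is a hypothetical sketch of how a coding agent queries a vendor's docs MCP server over stdio, using the reference Python SDK (pip install mcp). The server package and tool name are invented; real vendors publish their own.

```python
# Hypothetical sketch: querying a vendor's documentation MCP server the way
# a coding agent would. The server package and tool name are invented;
# substitute whatever the vendor actually publishes.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="npx",
    args=["-y", "@example/docs-mcp-server"],  # hypothetical server package
)

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            # Ask the vendor's live docs instead of a stale training set.
            result = await session.call_tool(
                "search_docs",  # hypothetical tool name
                {"query": "deploy an agent to production"},
            )
            print(result.content)

asyncio.run(main())
```

The point is the shape of the interaction: the agent discovers the vendor's tools at runtime and asks the live documentation a question, instead of guessing from whatever happened to be in its training data.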
If You Want Higher Platform Adoption, Improve Your Docs
At the end of the day, vibe coding and even more professional agentic development will continue, even if the current flavor of it is LESS productive than working without AI tools. Your mishmash of documentation, full of code examples using outdated APIs, inconsistent naming conventions, and incomplete instructions, is making things harder on both humans and agents.
If you’re interested in building and deploying Vertex AI agents and want an easy-to-use guide for your new robot overlords, check out this document here.
Let's Work Together
Interested in digital transformation, strategic advisory, or technology leadership? I'd love to connect and discuss how we can work together.
Get In Touch