Building Jerna Digital: 100% AI-Agentic Development
Published on February 12, 2026
The Challenge
Every consulting business needs a portfolio site. But as a technical consultant who helps startups leverage AI, the site itself needed to be the proof of concept. The challenge: build a production-grade, multilingual, accessible portfolio website using 100% AI-agentic development — no hand-written code, every line generated and committed by AI agents.
This wasn’t just about shipping a site. It was about proving that AI agents can handle real software engineering workflows end-to-end: architecture decisions, implementation, testing, design iteration, and deployment. The entire repository is open source on GitHub — every commit, every PR, every decision record is available for inspection.
The Workflow Engine: Claude Code Skills
The backbone of this project is a set of custom Claude Code skills — reusable slash commands stored in .claude/skills/ that encode the team’s development process into repeatable workflows. Three skills drive the entire lifecycle:
/create-issue — Structured Issue Creation
Instead of writing free-form issues, this skill turns a simple prompt like /create-issue feature Add dark mode toggle into a structured workflow:
- Asks targeted clarifying questions based on the issue type (features, bugs, hotfixes, refactors, etc.)
- Ensures GitHub labels exist and are consistent
- Generates a structured issue body from templates with acceptance criteria
- Creates the issue with proper labels automatically
Every issue in this repository was created through this skill, ensuring consistent structure and complete requirements from the start.
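To make the templating step concrete, here is a minimal sketch of how a structured issue body could be rendered. The section names, checkbox format, and function are illustrative assumptions, not the skill's actual template:

```python
# Sketch of the issue-body generation step of a /create-issue-style skill.
# The section headings and checklist format are assumptions for illustration,
# not the skill's actual template.

def build_issue_body(issue_type: str, summary: str, criteria: list[str]) -> str:
    """Render a structured issue body with an acceptance-criteria checklist."""
    lines = [
        f"## {issue_type.capitalize()}: {summary}",
        "",
        "### Acceptance Criteria",
    ]
    lines += [f"- [ ] {item}" for item in criteria]
    return "\n".join(lines)

body = build_issue_body(
    "feature",
    "Add dark mode toggle",
    ["Toggle persists across reloads", "Respects prefers-color-scheme"],
)
print(body)
```

The point is that the structure lives in code, not in the author's memory, so every issue comes out with the same sections regardless of who (or what) files it.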
/start-issue — Plan-First Development
This is where the most interesting AI-agentic behavior happens. When given an issue number, the skill enforces a mandatory plan-first workflow:
- Setup: Fetches the issue, assigns it, determines the branch type from labels, and creates a properly named feature branch
- Planning (mandatory): Enters a read-only exploration mode where the AI analyzes the codebase, understands existing patterns, and writes a detailed implementation plan — all before a single line of code is written
- Human review: The plan is presented for approval. No code is written until the human signs off
- Implementation: Only after approval does the AI implement the changes, creating atomic commits with conventional commit messages
- PR creation: Creates a draft PR, runs validation, then marks it ready for review
This plan-first approach is critical. It means the AI doesn’t just start writing code based on a vague understanding — it demonstrates its understanding first, and the human can course-correct before any investment is made.
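The branch-naming step in the setup phase can be sketched as follows. The label-to-prefix mapping is an assumption; the slug format matches the `feature/4-i18n-implementation` example later in the post:

```python
# Sketch of deriving a branch name from an issue's number, labels, and title.
# The label-to-prefix table is an illustrative assumption, not the skill's
# exact mapping.
import re

PREFIXES = {"feature": "feature", "bug": "fix", "hotfix": "hotfix", "refactor": "refactor"}

def branch_name(issue_number: int, labels: list[str], title: str) -> str:
    prefix = next((PREFIXES[l] for l in labels if l in PREFIXES), "feature")
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{prefix}/{issue_number}-{slug}"

print(branch_name(4, ["feature"], "i18n implementation"))  # feature/4-i18n-implementation
```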
/make-release — Automated Releases
Once a PR is reviewed and ready, the release skill handles the rest:
- Validates the PR is mergeable and not still in draft
- Asks for the release type (patch, minor, major) and calculates the next semver version
- Merges with squash, generates a changelog from commits, and creates a GitHub release with tags
- Closes the related issue with a link to the release
- Optionally cleans up the feature branch
This turns what would be a 10-step manual process into a single command with human checkpoints at the right moments.
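The version calculation in the second step is plain semver arithmetic. A minimal sketch, assuming the `vMAJOR.MINOR.PATCH` tag format seen in the `v0.3.0` example later in the post:

```python
# Sketch of the semver bump in a /make-release-style skill.
# Assumes tags of the form "vMAJOR.MINOR.PATCH".

def next_version(current: str, release_type: str) -> str:
    major, minor, patch = map(int, current.lstrip("v").split("."))
    if release_type == "major":
        return f"v{major + 1}.0.0"
    if release_type == "minor":
        return f"v{major}.{minor + 1}.0"
    if release_type == "patch":
        return f"v{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown release type: {release_type}")

print(next_version("v0.2.1", "minor"))  # v0.3.0
```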
The Development Lifecycle in Practice
Here’s what a typical feature looks like in this project, from idea to production:
1. Issue creation — /create-issue feature Add internationalization support generates a structured issue with clarifying questions and acceptance criteria.
2. Branch & plan — /start-issue 4 creates feature/4-i18n-implementation, enters plan mode, explores the codebase, and proposes an architecture with ADRs. The human reviews and approves.
3. Implementation — The AI implements the approved plan in atomic commits: feat(i18n): add translation system, feat(i18n): add language switcher, test(i18n): add E2E tests. Each commit passes pre-commit hooks (lint, format, typecheck, build) automatically.
4. Quality gates — Pre-push hooks run Chromium E2E tests locally. CI runs a 3-tier strategy: draft PRs get fast lint-only checks, ready PRs get Chromium E2E, and merges to main get full cross-browser E2E testing.
5. Release — /make-release 5 merges the PR, creates v0.3.0, generates the changelog, closes the issue, and cleans up the branch.
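The 3-tier CI strategy from step 4 boils down to a small decision function. This is an illustrative sketch of the logic, not the actual workflow configuration; the event names are assumptions:

```python
# Sketch of the 3-tier CI selection described above: draft PRs get fast
# lint-only checks, ready PRs get Chromium E2E, merges to main get full
# cross-browser E2E. Event names are illustrative assumptions.

def ci_tier(event: str, is_draft: bool = False) -> str:
    if event == "push_main":
        return "full cross-browser E2E"
    if event == "pull_request":
        return "lint only" if is_draft else "Chromium E2E"
    return "skip"

print(ci_tier("pull_request", is_draft=True))   # lint only
print(ci_tier("pull_request", is_draft=False))  # Chromium E2E
print(ci_tier("push_main"))                     # full cross-browser E2E
```

The design choice here is that cost scales with confidence: cheap checks while work is in flux, the expensive cross-browser suite only once code is actually landing on main.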
The human’s role throughout? Setting direction, reviewing plans, approving releases. The AI handles everything else.
Documentation as Context
One of the most powerful patterns in this project is documentation as AI context. The CLAUDE.md file at the project root serves as persistent instructions for the AI agent — coding conventions, project structure, common tasks, architectural decisions. Every time the AI starts a new session, it reads this file and picks up where it left off with full context.
Architecture Decision Records (ADRs) in docs/decisions/ document the why behind technical choices. When the AI needs to make a new decision, it can reference existing ADRs to stay consistent. When a human reviews a plan, the ADRs provide the reasoning trail.
This creates a flywheel effect: the more the AI documents, the better context future sessions have, leading to more consistent and faster development.
Results
- 100% AI-authored code — Every line, every commit, generated by AI agents
- Complete development lifecycle — Issues, branches, PRs, releases, all managed through custom skills
- Production-grade quality — Comprehensive E2E tests, strict TypeScript, automated quality gates
- Fully open source — The entire repository is available for inspection, including all skills, ADRs, and commit history
Key Takeaways
- Skills turn AI agents into team members — By encoding your development process into reusable commands, AI agents don’t just write code; they follow your team’s workflow
- Plan-first development prevents waste — Forcing the AI to explain its approach before writing code catches misunderstandings early and builds trust
- Quality comes from process, not authorship — Pre-commit hooks, CI pipelines, and type safety work regardless of who (or what) writes the code
- Documentation is a force multiplier — CLAUDE.md and ADRs give the AI agent context that makes each session faster and more consistent than the last
- The human role shifts to direction and review — The most effective pattern is: human sets intent, AI proposes a plan, human approves, AI implements, automated gates verify