Cloudron Packaging Assessment Toolkit

A set of tools to help the Cloudron community evaluate, prioritise, and package applications. Built by a community member; not affiliated with the Cloudron team.

What is in this toolkit

1. Packaging Assessment Agent (for Claude AI)

A system prompt that turns Claude into a structured Cloudron packaging assessor. Give it a GitHub URL, and it produces a detailed report covering:

  • Structural difficulty (processes, databases, runtime, broker, filesystem, auth): how hard is the initial packaging?
  • Compliance and maintenance cost (SSO quality, upstream stability, backup complexity, platform fit, config drift): how hard is it to keep it working as a good Cloudron citizen?
  • Specific evidence for every score
  • Key risks identified
  • Recommended packaging approach
  • What still needs manual investigation

How to set it up:

  1. Go to claude.ai and create a new Project (requires Claude Pro or Team)
  2. Open the project settings and paste the contents of cloudron-assessment-agent.md into the Project Instructions field
  3. Optionally upload cloudron-packaging-reference.md as Project Knowledge for extra context about the base image and packaging patterns
  4. Start a new conversation inside that project

How to use it:

Type something like:

Assess this app for Cloudron packaging: https://github.com/FacilMap/facilmap

The agent will:

  • Fetch and read the repo's README, docker-compose.yml, Dockerfile, and package manifests
  • Search for SSO/LDAP/OIDC documentation
  • Check the release history for stability
  • Score 11 sub-axes across two dimensions
  • Produce a structured markdown report

The report is designed to be posted directly as a forum reply on a wishlist thread.

Limitations:

  • The agent cannot test anything at runtime. It reads code and docs, not running containers.
  • SSO integration, filesystem write paths, and WebSocket behaviour all need manual verification on a live Cloudron instance.
  • Confidence depends on what evidence is available. A well-documented app with a compose file gets a high-confidence assessment. An undocumented alpha project gets a low-confidence one.
  • It tends to be slightly optimistic on structural scores and cannot predict upstream behaviour (licensing changes, breaking updates).

Example output: See example-assessment-facilmap.md for a complete report on FacilMap.


2. Interactive Packaging Assessment Tool (HTML)

A self-contained HTML file (cloudron-scorer.html, ~40 KB) with four tabs:

Score an app — Interactive scorer with six structural axes. Select options, get a difficulty tier with colour coding.

Pre-scored apps — Gallery of ~40 apps from the forum wishlist, each with scores, tier, and expandable breakdown. Filterable by difficulty tier. Early-stage projects are tagged with a grey "early stage" label.

GitHub lookup — Enter a GitHub URL. The tool fetches the repo via the GitHub API (runs in your browser, no server needed), scans key files for database and broker references, detects runtimes, and gives a colour-coded difficulty estimate. Note: GitHub allows 60 unauthenticated API requests per hour.

How to use — Step-by-step guide for manually assessing any app.

How to access it: Open the HTML file directly in any browser (works offline), or host it on a Surfer instance for public access.
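The GitHub lookup tab boils down to a couple of unauthenticated GitHub REST API calls. The sketch below illustrates the approach; the function names are placeholders, not the tool's actual code:

```javascript
// Illustrative sketch — NOT the actual code in cloudron-scorer.html.

// Pull "owner/repo" out of a GitHub URL.
function parseGitHubUrl(url) {
  const [owner, repo] = new URL(url).pathname.split("/").filter(Boolean);
  return { owner, repo };
}

// Scan a docker-compose.yml body for database and broker hints.
function scanComposeText(text) {
  return {
    needsPostgres: /postgres/i.test(text),
    needsMysql: /mysql|mariadb/i.test(text),
    needsRedis: /redis/i.test(text),
    needsBroker: /rabbitmq|amqp|kafka/i.test(text),
  };
}

// Fetch repo metadata plus the compose file via the GitHub REST API.
// Unauthenticated requests are limited to 60/hour per IP; a 403 response
// usually means the limit was hit.
async function lookupRepo(url) {
  const { owner, repo } = parseGitHubUrl(url);
  const api = `https://api.github.com/repos/${owner}/${repo}`;

  const metaRes = await fetch(api);
  if (!metaRes.ok) throw new Error(`GitHub API: ${metaRes.status}`);
  const meta = await metaRes.json();

  // The contents API returns base64 with embedded newlines; strip them
  // before decoding, or atob() throws.
  const fileRes = await fetch(`${api}/contents/docker-compose.yml`);
  const compose = fileRes.ok
    ? atob((await fileRes.json()).content.replace(/\s/g, ""))
    : "";

  return {
    runtimeHint: meta.language, // GitHub's primary-language detection
    lastPush: meta.pushed_at,
    ...scanComposeText(compose),
  };
}
```

Because everything runs through `fetch` in the browser, no server-side component is needed; the rate limit is the only practical constraint.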


3. Packaging Reference Document

cloudron-packaging-reference.md — a verified reference covering:

  • The actual contents of cloudron/base:5.0.0 (inventoried from a live Cloudron 9.1.3 container, correcting several documentation inaccuracies)
  • The runtime contract (filesystem rules, addons, environment variables, reverse proxy)
  • Template files (Dockerfile, start.sh, CloudronManifest.json, nginx.conf)
  • Process management patterns (single process, Nginx + backend, multiple workers)
  • The message broker problem (Redis as broker vs LavinMQ for AMQP)
  • Python, Node.js, and PHP packaging patterns
  • Upgrade handling, SQLite backup, logging, debugging
  • A full packaging checklist
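The manifest template follows the standard Cloudron shape. As a rough illustration (all values are placeholders; see the reference document and official docs for the full schema):

```json
{
  "manifestVersion": 2,
  "id": "org.example.myapp",
  "title": "My App",
  "version": "1.0.0",
  "healthCheckPath": "/",
  "httpPort": 8000,
  "addons": {
    "localstorage": {},
    "postgresql": {}
  },
  "memoryLimit": 536870912
}
```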

This is useful both as a reference for human packagers and as context for AI-assisted packaging.


4. Proposal Document

cloudron-ai-packaging-proposal.md — analysis of where packaging time actually goes (based on forum evidence, including girish's "40 hours per app" figure), what AI can and cannot do today, and a phased proposal for tooling that could help. Background reading, not a tool.


Files

  • cloudron-assessment-agent.md — Claude Project instructions. The assessment agent; paste into a Claude Project.
  • cloudron-packaging-reference.md — Markdown reference. Base image inventory, patterns, checklist.
  • cloudron-scorer.html — Self-contained HTML. Interactive scorer, app gallery, GitHub lookup.
  • example-assessment-facilmap.md — Markdown report. Sample agent output for FacilMap.
  • cloudron-ai-packaging-proposal.md — Markdown document. Strategic analysis and phased proposal.
  • forum-post-draft.md — Markdown draft. Draft forum post for the AI packaging thread.

How this was built

The toolkit emerged from a multi-day conversation exploring Cloudron packaging from first principles. Key steps included:

  • Running a comprehensive inventory inside a live Cloudron 9.1.3 container to verify what is actually in the base image (correcting several assumptions from documentation)
  • Researching the Cloudron forum for real packaging experiences, maintainer statements about effort, and failure cases (Jitsi networking issues, Stirling-PDF SSO licensing, Nextcloud upgrade breakages, docassemble multi-week packaging)
  • Building and iterating the scoring system from a simple four-axis model to an eleven-axis two-dimensional framework
  • Identifying that initial packaging (roughly 30% of the effort) is already being accelerated by AI, while compliance and maintenance work (the other 70%) remains the real bottleneck

Contributing

This is a community effort. If you use the assessment agent and find the scores are wrong for an app you have actually packaged, that feedback is invaluable. Please share corrections on the forum.

The pre-scored apps in the HTML tool should be updated as apps get packaged or as new wishlist entries appear. The HTML file is a single self-contained file that can be edited in any text editor.
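If you do edit the file, the gallery data is most likely a plain JavaScript array inside the page's script section. The shape below is purely hypothetical (inspect the actual file before editing), using the six structural axes from the scorer and dummy values:

```javascript
// Hypothetical entry shape — cloudron-scorer.html may structure its data
// differently; check the file's <script> section before editing.
const prescoredApps = [
  {
    name: "Example App",
    repo: "https://github.com/example/example-app",
    earlyStage: false,
    // The six structural axes used by the "Score an app" tab (dummy values).
    scores: { processes: 2, databases: 2, runtime: 1, broker: 0, filesystem: 1, auth: 2 },
  },
];

// A structural total like this could drive the colour-coded difficulty tier.
const total = Object.values(prescoredApps[0].scores).reduce((a, b) => a + b, 0);
```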


March 2026. Cloudron 9.1.3, base image 5.0.0.